Certificate authority and access plane for SSH, Kubernetes, web applications, and databases

Teleport

Teleport is an identity-aware, multi-protocol access proxy that understands the SSH, HTTPS, Kubernetes API, MySQL, and PostgreSQL wire protocols.

On the server side, Teleport is a single binary that enables convenient, secure access to behind-NAT resources such as SSH nodes, Kubernetes clusters, internal web applications, and databases.

Teleport is trivial to set up as a Linux daemon or in a Kubernetes pod, and it is rapidly replacing legacy sshd-based setups at organizations that need:

  • Developer convenience: instant, secure access to everything developers need across many environments and cloud providers.
  • An audit log with session recording/replay for multiple protocols.
  • Easy management of trust between teams, organizations, and data centers.
  • Role-based access control (RBAC) and flexible access workflows (one-time access requests).

In addition to its hallmark features, Teleport is interesting for smaller teams because it facilitates easy adoption of the best infrastructure security practices like:

  • No shared secrets to manage, such as SSH keys: Teleport uses certificate-based access with automatic certificate expiration for all protocols.
  • Second-factor authentication (2FA) for everything.
  • Collaborative troubleshooting through session sharing.
  • Single sign-on (SSO) for everything via GitHub Auth, OpenID Connect, or SAML with providers like Okta or Active Directory.
  • Infrastructure introspection: every SSH node, database instance, Kubernetes cluster, or internal web app and its status can be queried via the CLI and Web UI.

Teleport is built on top of the high-quality Go SSH implementation. It is fully compatible with OpenSSH and can be used with sshd servers and ssh clients.

Project Links:

  • Teleport Website: the official website of the project.
  • Documentation: admin guide, user manual, and more.
  • Demo Video: a 5-minute video overview of the UI.
  • Teleconsole: a free service to "invite" SSH clients behind NAT, built on top of Teleport.
  • Blog: where we publish Teleport news.
  • Forum: ask a setup question, post a tutorial, or share feedback and ideas.
  • Slack: need help with setup? Ping us in our Slack channel.

Teleport 6.0 Demo Video (4:00)

Installing and Running

Download the latest binary release, unpack the .tar.gz, and run sudo ./install. This will copy the Teleport binaries into /usr/local/bin.
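For example, assuming the downloaded tarball is named teleport-vX.Y.Z-linux-amd64-bin.tar.gz and unpacks into a teleport/ directory (adjust the filename to the release you actually downloaded):

$ tar -xzf teleport-vX.Y.Z-linux-amd64-bin.tar.gz
$ cd teleport
$ sudo ./install   # copies the Teleport binaries into /usr/local/bin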

Then you can run Teleport as a single-node cluster:

$ sudo teleport start

In a production environment, Teleport must run as root. For a quick test drive, you can instead run chown $USER /var/lib/teleport and start Teleport as $USER; in that case, however, you will only be able to log in as yourself.
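A minimal sketch of that non-root "play" setup, assuming the /var/lib/teleport data directory already exists (the build steps below show how to create it):

$ sudo chown $USER /var/lib/teleport   # let the current user own the data directory
$ teleport start                       # start a single-node cluster without root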

Docker

Deploy Teleport

If you wish to deploy Teleport inside a Docker container:

# This command will pull the Teleport container image for version 6
$ docker pull quay.io/gravitational/teleport:6

View the latest tags on Quay.io: gravitational/teleport

For Local Testing and Development

Follow instructions at docker/README

Building Teleport

The Teleport source code consists of the Teleport daemon, written in Go, and a web UI, written in JavaScript (a git submodule located in the /webassets directory).

Make sure you have Go v1.15 or newer, then run:

# get the source & build:
$ git clone https://github.com/gravitational/teleport.git
$ cd teleport
$ make full

# create the default data directory before starting:
$ sudo mkdir -p -m0700 /var/lib/teleport
$ sudo chown $USER /var/lib/teleport

If the build succeeds, the binaries will be placed in $GOPATH/src/github.com/gravitational/teleport/build.

NOTE: The Go compiler is somewhat sensitive to the amount of memory: you will need at least 1GB of virtual memory to compile Teleport. A 512MB instance without swap will not work.

NOTE: This will build the latest version of Teleport, regardless of whether it is stable. If you want to build the latest stable release, check out that tag (e.g. git checkout v6.0.0) before running make full.

Web UI

Teleport Web UI is located in the Gravitational Webapps repo.

Rebuilding Web UI for development

You can clone that repository and rebuild the Teleport UI package with:

$ git clone git@github.com:gravitational/webapps.git
$ cd webapps
$ make build-teleport

Then you can replace the Teleport Web UI files with the ones found in the generated /dist folder.

To enable speedy iterations on the Web UI, you can run a local web-dev server.

You can also tell teleport to load the Web UI assets from the source directory. To enable this behavior, set the environment variable DEBUG=1 and rebuild with the default target:

# Run Teleport as a single-node cluster in development mode:
$ DEBUG=1 ./build/teleport start -d

Keep the server running in this mode, and make your UI changes in the /dist directory. Refer to the webapps README for instructions on how to update the Web UI.

Updating Web UI assets

After you commit a change to the webapps repo, you need to update the Web UI assets in the webassets/ git submodule.

Use make update-webassets to update the webassets repo and create a PR for teleport to update its git submodule.

You will need to have the gh utility installed on your system for the script to work. You can download it from https://github.com/cli/cli/releases/latest

Updating Documentation

TL;DR version:

make docs
make run-docs

For more details, take a look at docs/README

Managing dependencies

Dependencies are managed using Go modules. Here are instructions for some common tasks:

Add a new dependency

Latest version:

go get github.com/new/dependency
# Update the source to actually use this dependency, then run:
make update-vendor

Specific version:

go get github.com/new/dependency@version
# Update the source to actually use this dependency, then run:
make update-vendor

Set dependency to a specific version

go get github.com/new/dependency@version
make update-vendor

Update dependency to the latest version

go get -u github.com/new/dependency
make update-vendor

Update all dependencies

go get -u all
make update-vendor

Debugging dependencies

Why is a specific package imported: go mod why $pkgname.

Why is a specific module imported: go mod why -m $modname.

Why is a specific version of a module imported: go mod graph | grep $modname.
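For example, using golang.org/x/crypto (the module providing the Go SSH implementation Teleport builds on) purely as an illustration:

go mod why golang.org/x/crypto/ssh        # why is this package imported
go mod why -m golang.org/x/crypto         # why is this module imported
go mod graph | grep golang.org/x/crypto   # which versions are pulled in, and by whom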

Why Did We Build Teleport?

The Teleport creators used to work together at Rackspace. We noticed that most cloud computing users struggle with setting up and configuring infrastructure security because popular tools, while flexible, are complex to understand and expensive to maintain. Additionally, most organizations use multiple infrastructure form factors such as several cloud providers, multiple cloud accounts, servers in colocation, and even smart devices. Some of those devices run on untrusted networks, behind third party firewalls. This only magnifies complexity and increases operational overhead.

We had a choice: either start a security consulting business, or build a solution that's dead-easy to use and understand, something that creates the illusion that all of your servers are in the same room as you, as if they had been magically teleported. And so Teleport was born!

More Information

Support and Contributing

We offer a few different options for support. First of all, we try to provide clear and comprehensive documentation. The docs are also on GitHub, so feel free to create a PR or file an issue if you think improvements can be made. If you still have questions after reviewing our docs, you can also:

  • Join Teleport Discussions to ask questions. Our engineers are available there to help you.
  • If you want to contribute to Teleport or file a bug report/issue, you can do so by creating an issue here on GitHub.
  • If you are interested in Teleport Enterprise or would like more responsive support during a proof of concept (POC), we can also create a dedicated Slack channel for you. You can reach out to us through our website to arrange a POC.

Is Teleport Secure and Production Ready?

Teleport has completed several security audits from nationally recognized technology security companies. Some of them have been made public. We are comfortable with the use of Teleport from a security perspective.

You can see the list of companies who use Teleport in production on the Teleport product page.

However, Teleport is still a relatively young product so you may experience usability issues. We are actively supporting Teleport and addressing any issues that are submitted to this repo. Ask questions, send pull requests, report issues and don't be shy! :)

The latest stable Teleport build can be found in Releases

Who Built Teleport?

Teleport was created by Gravitational Inc. We have built Teleport by borrowing from our previous experiences at Rackspace. It has been extracted from Gravity, our Kubernetes distribution optimized for deploying and remotely controlling complex applications into multiple environments at the same time:

  • Multiple cloud regions
  • Colocation
  • Private enterprise clouds located behind firewalls
Comments
  • Flaky Test Tracker


    Investigating

    Process

    1. Start with the assumption the test is correct and is highlighting a bug in Teleport.
    2. Run multiple parallel unit or integration tests to reproduce.
    3. Attempt to fix the test.
    4. Propose quarantine.
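
    For step 2, one way to reproduce a flaky unit test locally is to run it repeatedly with the race detector enabled, for example for one of the tests listed below (standard go test flags; adjust the package and test name as needed):

    go test -race -count=100 -run TestClientDisconnect ./lib/srv/regular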

    Unit Tests

    • ~Frequently fails. github.com/gravitational/teleport/lib/service.TestTeleportProcess_reconnectToAuth~
    • ~Frequently fails. github.com/gravitational/teleport/lib/srv/regular.TestClientDisconnect~
    • ~Frequently fails. github.com/gravitational/teleport/lib/cache.TestCache_Backoff~
    • ~github.com/gravitational/teleport/lib/srv/regular.TestProxyReverseTunnel~
    • ~github.com/gravitational/teleport/lib/auth.TestAPILockedOut~
    • github.com/gravitational/teleport/lib/auth.TestAPI
    • Often fails locally github.com/gravitational/teleport/lib/auth.TestTiming (also reported in #4653)

    Integration

    Metrics

    Trailing 7-day pass rate for unit and integration tests.

    • Week of January 10th. Unit 67%, Integration 56%

    Proposed for Quarantine

    This section is for tests that provide business value but are inherently flaky due to a dependence on time and an external resource (like CPU or network). For example, a test that waits for an event to occur and times out if the event does not occur after some time.

    Quarantined tests will be triaged by @russjones weekly and potentially serialized and put into a retry loop.

    • https://github.com/gravitational/teleport/issues/9491
    • lib/auth.PasswordSuite.TestTiming requires exists/not-exists tests to be within 10% of each other

    Fixed

    • https://github.com/gravitational/teleport/pull/9316
    • https://github.com/gravitational/teleport/pull/9326
    • https://github.com/gravitational/teleport/pull/9119
    • https://github.com/gravitational/teleport/pull/9118
    • https://github.com/gravitational/teleport/pull/9117
    • https://github.com/gravitational/teleport/pull/8888
    • https://github.com/gravitational/teleport/pull/8643
    • https://github.com/gravitational/teleport/pull/8608
    • https://github.com/gravitational/teleport/pull/8744
  • Added YUM implementation of OS package build tool


    This PR has several inter-dependent changes:

    • Renamed the "build-apt-repos" to "build-os-package-repos"
    • Rewrote the tool to use subcommands (i.e. from go run . <args> to go run . <apt/yum> <args>)
    • Implemented YUM repo building per rfd/0058-package-distribution.md
    • Added os-specific "*.repo" files for use with dnf config-manager and yum-config-manager
    • Added redirects on APT and YUM buckets from index.html to https://goteleport.com/docs/installation/
    • Refactored dronegen to more easily support adding OS package repos to the promotion pipeline
    • Added YUM repo building to dronegen
    • Added dronegen parallelism support
    • Added dronegen resource limit support

    Before this PR is merged I need to run migrations for old Teleport versions to populate the prod bucket with old release artifacts. Additionally, much of its functionality depends on https://github.com/gravitational/cloud-terraform/pull/701 being merged first.

    This PR is ready for code review but not yet ready for merge.

  • Teleport 10 Test Plan


    Manual Testing Plan

    Below are the items that should be manually tested with each release of Teleport. These tests should be run on both a fresh install of the version to be released and an upgrade from the previous version of Teleport.

    • [x] Adding nodes to a cluster @avatus

      • [x] Adding Nodes via Valid Static Token
      • [x] Adding Nodes via Valid Short-lived Tokens
      • [x] Adding Nodes via Invalid Token Fails
      • [x] Revoking Node Invitation
    • [x] Labels @avatus

      • [x] Static Labels
      • [x] Dynamic Labels
    • [x] Trusted Clusters @EdwardDowling @hugoShaka

      • [x] Adding Trusted Cluster Valid Static Token
      • [x] Adding Trusted Cluster Valid Short-lived Token
      • [x] Adding Trusted Cluster Invalid Token
      • [x] Removing Trusted Cluster
    • [x] RBAC @alistanis

      Make sure that invalid and valid attempts are reflected in the audit log.

      • [x] Successfully connect to node with correct role
      • [x] Unsuccessfully connect to a node in a role restricting access by label
      • [x] Unsuccessfully connect to a node in a role restricting access by invalid SSH login
      • [x] Allow/deny role option: SSH agent forwarding
      • [x] Allow/deny role option: Port forwarding
    • [x] Verify that custom PAM environment variables are available as expected. @xacrimon

    • [x] Users @codingllama

      With every user combination, try to log in and sign up with an invalid second factor and an invalid password to see how the system reacts.

      WebAuthn in the release tsh binary is implemented using libfido2. Ask for a statically built pre-release binary for realistic tests. (tsh fido2 diag should work in our binary.)

      Touch ID requires a signed tsh; ask for a signed pre-release binary so you can run the tests.

      • [x] Adding Users Password Only

      • [x] Adding Users OTP

      • [x] Adding Users WebAuthn

      • [x] Adding Users Touch ID

      • [x] Managing MFA devices

        • [x] Add an OTP device with tsh mfa add
        • [x] Add a WebAuthn device with tsh mfa add
        • [x] Add a Touch ID device with tsh mfa add
        • [x] List MFA devices with tsh mfa ls
        • [x] Remove an OTP device with tsh mfa rm
        • [x] Remove a WebAuthn device with tsh mfa rm
        • [x] Attempt removing the last MFA device on the user
          • [x] with second_factor: on in auth_service, should fail
          • [x] with second_factor: optional in auth_service, should succeed
      • [x] Login Password Only

      • [x] Login with MFA

        • [x] Add an OTP, a WebAuthn and a Touch ID device with tsh mfa add
        • [x] Login via OTP
        • [x] Login via WebAuthn
        • [x] Login via Touch ID
        • [x] Login via WebAuthn using a U2F device

        U2F devices must be registered in a previous version of Teleport.

        Using Teleport v9, set auth_service.authentication.second_factor = u2f, restart the server, and then register a U2F device (tsh mfa add). Upgrade the installation to the current Teleport version (one major version at a time) and try to log in using the U2F device as your second factor - it should work.
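
        A minimal sketch of the legacy setting described above (depending on the Teleport v9 configuration, additional u2f fields such as app_id may also be required):

        auth_service:
          authentication:
            second_factor: u2f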

      • [x] Login OIDC @Tener

      • [x] Login SAML @Tener

      • [x] Login GitHub @Tener

      • [x] Deleting Users @Tener

    • [x] Backends

      • [x] Teleport runs with etcd @EdwardDowling
      • [x] Teleport runs with dynamodb @xacrimon
      • [x] Teleport runs with SQLite @EdwardDowling
      • [x] Teleport runs with Firestore @xacrimon
    • [x] Session Recording @gabrielcorado

      • [x] Session recording can be disabled
      • [x] Sessions can be recorded at the node
        • [x] Sessions in remote clusters are recorded in remote clusters
      • [x] Sessions can be recorded at the proxy
        • [x] Sessions on remote clusters are recorded in the local cluster
        • [x] Enable/disable host key checking.
    • [x] Audit Log @gabrielcorado

      • [x] Failed login attempts are recorded

      • [x] Interactive sessions have the correct Server ID

        • [x] Server ID is the ID of the node in "session_recording: node" mode
        • [x] Server ID is the ID of the proxy in "session_recording: proxy" mode

        Node/Proxy ID may be found at /var/lib/teleport/host_uuid in the corresponding machine.

        Node IDs may also be queried via tctl nodes ls.
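
        For example, to check both values while verifying these events (paths and commands as given above):

        cat /var/lib/teleport/host_uuid   # on the node or proxy machine
        tctl nodes ls                     # on the auth server, lists node IDs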

      • [x] Exec commands are recorded

      • [x] scp commands are recorded

      • [x] Subsystem results are recorded

        Subsystem testing may be achieved using both Recording Proxy mode and OpenSSH integration.

        Assuming the proxy is proxy.example.com:3023 and node1 is a node running OpenSSH/sshd, you may use the following command to trigger a subsystem audit log:

        sftp -o "ProxyCommand ssh -o 'ForwardAgent yes' -p 3023 %[email protected] -s proxy:%h:%p" root@node1
        
    • [x] Interact with a cluster using tsh @alistanis @hugoShaka

      These commands should ideally be tested in both recording and non-recording modes, as they are implemented in different ways.

      • [x] tsh ssh <regular-node>
      • [x] tsh ssh <node-remote-cluster>
      • [x] tsh ssh -A <regular-node>
      • [x] tsh ssh -A <node-remote-cluster>
      • [x] tsh ssh <regular-node> ls
      • [x] tsh ssh <node-remote-cluster> ls
      • [x] tsh join <regular-node>
      • [x] tsh join <node-remote-cluster>
      • [x] tsh play <regular-node>
      • [x] tsh play <node-remote-cluster>
      • [x] tsh scp <regular-node>
      • [x] tsh scp <node-remote-cluster>
      • [x] tsh ssh -L <regular-node>
      • [x] tsh ssh -L <node-remote-cluster>
      • [x] tsh ls
      • [x] tsh clusters
    • [x] Interact with a cluster using ssh @Joerger. Make sure to test both recording and regular proxy modes.

      • [x] ssh <regular-node>
      • [x] ssh <node-remote-cluster>
      • [x] ssh -A <regular-node>
      • [x] ssh -A <node-remote-cluster>
      • [x] ssh <regular-node> ls
      • [x] ssh <node-remote-cluster> ls
      • [x] scp <regular-node>
      • [x] scp <node-remote-cluster>
      • [x] ssh -L <regular-node>
      • [x] ssh -L <node-remote-cluster>
    • [x] Verify proxy jump functionality @Joerger. Log into the leaf cluster via root, shut down the root proxy, and verify that proxy jump works.

      • [x] tsh ssh -J <leaf-proxy>
      • [x] ssh -J <leaf-proxy>
    • [x] Interact with a cluster using the Web UI @Joerger

      • [x] Connect to a Teleport node
      • [x] Connect to an OpenSSH node
      • [x] Check agent forwarding is correct based on role and proxy mode.

    User accounting @xacrimon

    • [x] Verify that active interactive sessions are tracked in /var/run/utmp on Linux.
    • [x] Verify that interactive sessions are logged in /var/log/wtmp on Linux.

    Combinations @capnspacehook

    For some manual testing, many combinations need to be tested. For example, for interactive sessions the 12 combinations are below.

    • [x] Connect to an OpenSSH node in a local cluster using OpenSSH.
    • [x] Connect to an OpenSSH node in a local cluster using Teleport.
    • [x] Connect to an OpenSSH node in a local cluster using the Web UI.
    • [x] Connect to a Teleport node in a local cluster using OpenSSH.
    • [x] Connect to a Teleport node in a local cluster using Teleport.
    • [x] Connect to a Teleport node in a local cluster using the Web UI.
    • [x] Connect to an OpenSSH node in a remote cluster using OpenSSH.
    • [x] Connect to an OpenSSH node in a remote cluster using Teleport.
    • [x] Connect to an OpenSSH node in a remote cluster using the Web UI.
    • [x] Connect to a Teleport node in a remote cluster using OpenSSH.
    • [x] Connect to a Teleport node in a remote cluster using Teleport.
    • [x] Connect to a Teleport node in a remote cluster using the Web UI.

    Teleport with EKS/GKE @tigrato

    • [x] Deploy Teleport on a single EKS cluster
    • [x] Deploy Teleport on two EKS clusters and connect them via trusted cluster feature
    • [x] Deploy Teleport Proxy outside of GKE cluster fronting connections to it (use this script to generate a kubeconfig)
    • [x] Deploy Teleport Proxy outside of EKS cluster fronting connections to it (use this script to generate a kubeconfig)

    Teleport with multiple Kubernetes clusters @tigrato

    Note: you can use GKE, EKS, or minikube to run Kubernetes clusters. Minikube is the only caveat: it's not publicly reachable, so don't run a proxy there.

    • [x] Deploy combo auth/proxy/kubernetes_service outside of a Kubernetes cluster, using a kubeconfig
      • [x] Login with tsh login, check that tsh kube ls has your cluster
      • [x] Run kubectl get nodes, kubectl exec -it $SOME_POD -- sh
      • [x] Verify that the audit log recorded the above request and session
    • [x] Deploy combo auth/proxy/kubernetes_service inside of a Kubernetes cluster
      • [x] Login with tsh login, check that tsh kube ls has your cluster
      • [x] Run kubectl get nodes, kubectl exec -it $SOME_POD -- sh
      • [x] Verify that the audit log recorded the above request and session
    • [x] Deploy combo auth/proxy_service outside of the Kubernetes cluster and kubernetes_service inside of a Kubernetes cluster, connected over a reverse tunnel
      • [x] Login with tsh login, check that tsh kube ls has your cluster
      • [x] Run kubectl get nodes, kubectl exec -it $SOME_POD -- sh
      • [x] Verify that the audit log recorded the above request and session
    • [x] Deploy a second kubernetes_service inside of another Kubernetes cluster, connected over a reverse tunnel
      • [x] Login with tsh login, check that tsh kube ls has both clusters
      • [x] Switch to a second cluster using tsh kube login
      • [x] Run kubectl get nodes, kubectl exec -it $SOME_POD -- sh on the new cluster
      • [x] Verify that the audit log recorded the above request and session
    • [x] Deploy combo auth/proxy/kubernetes_service outside of a Kubernetes cluster, using a kubeconfig with multiple clusters in it
      • [x] Login with tsh login, check that tsh kube ls has all clusters
    • [x] Test Kubernetes screen in the web UI (tab is located on left side nav on dashboard):
      • [x] Verify that all kubes registered are shown with correct name and labels
      • [x] Verify that clicking on a row's Connect button renders a dialogue with manual instructions, with the Step 2 login value matching the row's Name column
      • [x] Verify searching for name or labels in the search bar works
      • [x] Verify you can sort by the Name column

    Teleport with FIPS mode @alistanis @r0mant

    • [x] Perform trusted cluster, Web, and SSH sanity checks with all Teleport components deployed in FIPS mode.

    ACME @rudream

    • [ ] Teleport can fetch TLS certificate automatically using ACME protocol.

    Migrations @hugoShaka

    • [x] Migrate trusted clusters from 9.3 to 10.0
      • [x] Migrate the auth server on the main cluster, then the rest of the servers on the main cluster; SSH should work for both the main and old clusters
      • [x] Migrate the auth server on the remote cluster, then the rest of the remote cluster; SSH should work

    Command Templates

    When interacting with a cluster, the following command templates are useful:

    OpenSSH

    # when connecting to the recording proxy, `-o 'ForwardAgent yes'` is required.
    ssh -o "ProxyCommand ssh -o 'ForwardAgent yes' -p 3023 %[email protected] -s proxy:%h:%p" \
      node.example.com
    
    # the above command only forwards the agent to the proxy, to forward the agent
    # to the target node, `-o 'ForwardAgent yes'` needs to be passed twice.
    ssh -o "ForwardAgent yes" \
      -o "ProxyCommand ssh -o 'ForwardAgent yes' -p 3023 %[email protected] -s proxy:%h:%p" \
      node.example.com
    
    # when connecting to a remote cluster using OpenSSH, the subsystem request is
    # updated with the name of the remote cluster.
    ssh -o "ProxyCommand ssh -o 'ForwardAgent yes' -p 3023 %[email protected] -s proxy:%h:%[email protected]" \
      node.foo.com
    

    Teleport

    # when connecting to an OpenSSH node, remember `-p 22` needs to be passed.
    tsh --proxy=proxy.example.com --user=<username> --insecure ssh -p 22 node.example.com
    
    # an agent can be forwarded to the target node with `-A`
    tsh --proxy=proxy.example.com --user=<username> --insecure ssh -A -p 22 node.example.com
    
    # the --cluster flag is used to connect to a node in a remote cluster.
    tsh --proxy=proxy.example.com --user=<username> --insecure ssh --cluster=foo.com -p 22 node.foo.com
    

    Teleport with SSO Providers @ptgott @Tener

    • [ ] G Suite install instructions work
      • [ ] G Suite Screenshots are up to date
    • [ ] Azure Active Directory (AD) install instructions work
      • [ ] Azure Active Directory (AD) Screenshots are up to date
    • [ ] ActiveDirectory (ADFS) install instructions work
      • [ ] Active Directory (ADFS) Screenshots are up to date
    • [ ] Okta install instructions work
      • [ ] Okta Screenshots are up to date
    • [ ] OneLogin install instructions work
      • [ ] OneLogin Screenshots are up to date
    • [ ] GitLab install instructions work
      • [ ] GitLab Screenshots are up to date
    • [ ] OIDC install instructions work
      • [ ] OIDC Screenshots are up to date
    • [ ] All providers with guides in docs are covered in this test plan

    tctl sso family of commands @Tener

    tctl sso configure helps to construct a valid connector definition:

    • [x] tctl sso configure github ... creates valid connector definitions
    • [x] tctl sso configure oidc ... creates valid connector definitions
    • [x] tctl sso configure saml ... creates valid connector definitions

    tctl sso test tests a provided connector definition, which can be loaded from a file or piped in from tctl sso configure or tctl get --with-secrets. Valid connectors are accepted; invalid ones are rejected with sensible error messages.
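
    A sketch of how that might look (flags are elided with "..." as in the items above; the file name is illustrative):

    tctl sso configure github ... > github-connector.yaml
    tctl sso test github-connector.yaml

    # or, without an intermediate file:
    tctl sso configure github ... | tctl sso test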

    • [x] Connectors can be tested with tctl sso test.
      • [x] GitHub
      • [x] SAML
      • [x] OIDC
        • [x] Google Workspace
        • [x] Non-Google IdP

    Teleport Plugins @marcoandredinis

    • [x] Test receiving a message via Teleport Slackbot
    • [x] Test receiving a new Jira Ticket via Teleport Jira

    AWS Node Joining @nklaassen

    Docs

    • [x] On EC2 instance with ec2:DescribeInstances permissions for local account: TELEPORT_TEST_EC2=1 go test ./integration -run TestEC2NodeJoin
    • [x] On EC2 instance with any attached role: TELEPORT_TEST_EC2=1 go test ./integration -run TestIAMNodeJoin
    • [x] EC2 Join method in IoT mode with node and auth in different AWS accounts
    • [x] IAM Join method in IoT mode with node and auth in different AWS accounts

    Passwordless @r0mant @espadolini

    Passwordless requires tsh compiled with libfido2 for most operations (apart from Touch ID). Ask for a statically-built tsh binary for realistic tests.

    Touch ID requires a properly built and signed tsh binary. Ask for a pre-release binary so you may run the tests.

    This section complements "Users -> Managing MFA devices". Ideally both macOS and Linux tsh binaries are tested for FIDO2 items.

    • [x] Diagnostics

      Both commands should pass all tests.

      • [x] tsh fido2 diag
      • [x] tsh touchid diag
    • [ ] Registration

      • [ ] Register a passwordless FIDO2 key (tsh mfa add, choose WEBAUTHN and passwordless)
      • [ ] Register a Touch ID credential (tsh mfa add, choose TOUCHID)
    • [ ] Login

      • [ ] Passwordless login using FIDO2 (tsh login --auth=passwordless)
      • [ ] Passwordless login using Touch ID (tsh login --auth=passwordless)
      • [ ] tsh login --auth=passwordless --mfa-mode=cross-platform uses FIDO2
      • [ ] tsh login --auth=passwordless --mfa-mode=platform uses Touch ID
      • [ ] tsh login --auth=passwordless --mfa-mode=auto prefers Touch ID
      • [ ] Passwordless disable switch works (auth_service.authentication.passwordless = false)
      • [ ] Cluster in passwordless mode defaults to passwordless (auth_service.authentication.connector_name = passwordless)
      • [ ] Cluster in passwordless mode allows MFA login (tsh login --auth=local)
    • [ ] Touch ID support commands

      • [ ] tsh touchid ls works
      • [ ] tsh touchid rm works (careful, may lock you out!)

    WEB UI @kimlisa @rudream @hatched

    Main

    For main, test with a role that has access to all resources.

    Top Nav

    • [x] Verify that cluster selector displays all (root + leaf) clusters
    • [x] Verify that user name is displayed
    • [x] Verify that the user menu shows logout, help & support, and account settings (for local users)

    Side Nav

    • [x] Verify that each item has an icon
    • [x] Verify that Collapse/Expand works: collapsed shows the > icon, expanded shows the v icon
    • [x] Verify that it automatically expands and highlights the item on page refresh

    Servers aka Nodes

    • [x] Verify that "Servers" table shows all joined nodes
    • [x] Verify that "Connect" button shows a list of available logins
    • [x] Verify that "Hostname", "Address" and "Labels" columns show the current values
    • [x] Verify that "Search" by hostname, address, labels works
    • [x] Verify that terminal opens when clicking on one of the available logins
    • [x] Verify that clicking on Add Server button renders dialogue set to Automatically view
      • [x] Verify clicking on Regenerate Script regenerates token value in the bash command
      • [x] Verify using the bash command successfully adds the server (refresh server list)
      • [x] Verify that clicking on Manually tab renders manual steps
      • [x] Verify that clicking back to Automatically tab renders bash command

    Applications

    • [x] Verify that clicking on Add Application button renders dialogue
      • [x] Verify input validation (prevent empty value and invalid url)
      • [x] Verify after input and clicking on Generate Script, bash command is rendered
      • [x] Verify clicking on Regenerate button regenerates token value in bash command

    Databases

    • [x] Verify that clicking on Add Database button renders dialogue for manual instructions:
      • [x] Verify selecting different options on Step 4 changes Step 5 commands

    Active Sessions

    • [x] Verify that "empty" state is handled
    • [x] Verify that it displays the session when session is active
    • [x] Verify that "Description", "Session ID", "Users", "Nodes" and "Duration" columns show correct values
    • [x] Verify that "OPTIONS" button allows to join a session

    Audit log

    • [x] Verify that time range button is shown and works
    • [x] Verify that clicking on the Session Ended event icon takes the user to the session player
    • [x] Verify that the event details dialogue renders when clicking on an event's details button
    • [x] Verify that searching by type, description, and created (date) works

    Users

    • [x] Verify that users are shown
    • [x] Verify that creating a new user works
    • [x] Verify that editing user roles works
    • [x] Verify that removing a user works
    • [x] Verify resetting a user's password works
    • [x] Verify search by username, roles, and type works

    Auth Connectors

    • [x] Verify when there are no connectors, empty state renders
    • [ ] Verify that creating OIDC/SAML/GITHUB connectors works
    • [ ] Verify that editing OIDC/SAML/GITHUB connectors works
    • [x] Verify that error is shown when saving an invalid YAML
    • [ ] Verify that correct hint text is shown on the right side
    • [ ] Verify that encrypted SAML assertions work with an identity provider that supports it (Azure).
    • [ ] Verify that created GitHub, SAML, and OIDC cards have their icons

    Roles

    • [x] Verify that roles are shown
    • [x] Verify that "Create New Role" dialog works
    • [x] Verify that deleting and editing works
    • [x] Verify that error is shown when saving an invalid YAML
    • [x] Verify that correct hint text is shown on the right side

    Managed Clusters

    • [x] Verify that it displays a list of clusters (root + leaf)
    • [x] Verify that every menu item works: nodes, apps, audit events, session recordings, etc.

    Help & Support

    • [x] Verify that all URLs work and correct (no 404)

    Access Requests

    Access Requests are an Enterprise feature and are not available in OSS.

    Creating Access Requests (Role Based)

    Create a role with limited permissions allow-roles-and-nodes. This role allows you to see the Role screen and ssh into all nodes.

    kind: role
    metadata:
      name: allow-roles-and-nodes
    spec:
      allow:
        logins:
        - root
        node_labels:
          '*': '*'
        rules:
        - resources:
          - role
          verbs:
          - list
          - read
      options:
        max_session_ttl: 8h0m0s
    version: v5
    
    

    Create another role with limited permissions, allow-users-with-short-ttl. This role's session expires in 4 minutes; it allows you to see the Users screen and denies access to all nodes.

    kind: role
    metadata:
      name: allow-users-with-short-ttl
    spec:
      allow:
        rules:
        - resources:
          - user
          verbs:
          - list
          - read
      deny:
        node_labels:
          '*': '*'
      options:
        max_session_ttl: 4m0s
    version: v5
    

    Create a user that has no access to anything but allows you to request roles:

    kind: role
    metadata:
      name: test-role-based-requests
    spec:
      allow:
        request:
          roles:
          - allow-roles-and-nodes
          - allow-users-with-short-ttl
          suggested_reviewers:
          - random-user-1
          - random-user-2
    version: v5
    
    • [x] Verify that under requestable roles, only allow-roles-and-nodes and allow-users-with-short-ttl are listed
    • [x] Verify you can select/input/modify reviewers
    • [x] Verify you can view the request you created from request list (should be in pending states)
    • [x] Verify there is a list of the reviewers you selected (the list is empty if none were selected AND suggested_reviewers wasn't defined)
    • [x] Verify you can't review own requests

    Creating Access Requests (Search Based)

    Create a role with access to searchable resources (apps, db, kubes, nodes, desktops). The template searcheable-resources is below.

    kind: role
    metadata:
      name: searcheable-resources
    spec:
      allow:
        app_labels:  # just example labels
          label1-key: label1-value
          env: [dev, staging] 
        db_labels:
          '*': '*'   # the asterisk gives the user access to everything
        kubernetes_labels:
          '*': '*' 
        node_labels:
          '*': '*'
        windows_desktop_labels:
          '*': '*'
    version: v5
    

    Create a user that has no access to resources, but allows you to search them:

    kind: role
    metadata:
      name: test-search-based-requests
    spec:
      allow:
        request:
          search_as_roles:
          - searcheable-resources
          suggested_reviewers:
          - random-user-1
          - random-user-2
    version: v5
    
    • [x] Verify that a user can see resources based on the searcheable-resources rules
    • [x] Verify you can select/input/modify reviewers
    • [x] Verify you can view the request you created from request list (should be in pending states)
    • [x] Verify there is a list of the reviewers you selected (the list is empty if none were selected AND suggested_reviewers wasn't defined)
    • [x] Verify you can't review own requests
    • [x] Verify that you can't mix adding resources from different clusters (there should be a warning dialogue that clears the selected list)

    Viewing & Approving/Denying Requests

    Create a user with the role reviewer that allows you to review all requests, and delete them.

    kind: role
    version: v3
    metadata:
      name: reviewer
    spec:
      allow:
        review_requests:
          roles: ['*']
    
    • [x] Verify you can view access request from request list
    • [x] Verify you can approve a request with message, and immediately see updated state with your review stamp (green checkmark) and message box
    • [x] Verify you can deny a request, and immediately see updated state with your review stamp (red cross)
    • [x] Verify that deleting a denied request removes it from the list

    Assuming Approved Requests (Role Based)

    • [x] Verify that assuming allow-roles-and-nodes allows you to see roles screen and ssh into nodes
    • [x] After assuming allow-roles-and-nodes, verify that assuming allow-users-with-short-ttl allows you to see the Users screen and denies access to nodes
      • [x] Verify a switchback banner is rendered with the roles assumed and a countdown of when it expires
      • [x] Verify switching back goes back to your default static role
      • [x] Verify that after re-assuming the allow-users-with-short-ttl role, the user is automatically logged out after the expiry is met (4 minutes)

    Assuming Approved Requests (Search Based)

    • [x] Verify that assuming an approved request allows you to see the resources you've requested.

    Assuming Approved Requests (Both)

    • [x] Verify assume buttons are only present for approved requests and for the logged-in user
    • [x] Verify that after clicking on the assume button, it is disabled in both the list and in viewing
    • [x] Verify that after re-login, requests that are not expired and are approved are assumable again

    Access Request Waiting Room

    Strategy Reason

    Create the following role:

    kind: role
    metadata:
      name: waiting-room
    spec:
      allow:
        request:
          roles:
          - <some other role to assign user after approval>
      options:
        max_session_ttl: 8h0m0s
        request_access: reason
        request_prompt: <some custom prompt to show in reason dialogue>
    version: v3
    
    • [x] Verify after login, reason dialogue is rendered with prompt set to request_prompt setting
    • [x] Verify after clicking send request, pending dialogue renders
    • [x] Verify after approving a request, dashboard is rendered
    • [x] Verify the correct role was assigned

    Strategy Always

    With the previous role you created from Strategy Reason, change request_access to always:
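
    A minimal sketch of the change (only request_access differs from the Strategy Reason role above):

      options:
        max_session_ttl: 8h0m0s
        request_access: always
        request_prompt: <some custom prompt to show in reason dialogue>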

    • [x] Verify after login, pending dialogue is auto rendered
    • [x] Verify after approving a request, dashboard is rendered
    • [x] Verify after denying a request, access denied dialogue is rendered
    • [x] Verify a switchback banner is rendered with the roles assumed and a countdown of when it expires
    • [x] Verify switchback button says Logout and clicking goes back to the login screen

    Strategy Optional

    With the previous role you created from Strategy Reason, change request_access to optional:

    • [x] Verify after login, dashboard is rendered as normal

    Terminal

    • [x] Verify that top nav has a user menu (Main and Logout)
    • [x] Verify that switching between tabs works on alt+[1...9]

    Node List Tab

    • [x] Verify that Cluster selector works (URL should change too)
    • [x] Verify that Quick launcher input works
    • [x] Verify that Quick launcher input handles input errors
    • [x] Verify that "Connect" button shows a list of available logins
    • [x] Verify that "Hostname", "Address" and "Labels" columns show the current values
    • [x] Verify that "Search" by hostname, address, labels work
    • [x] Verify that new tab is created when starting a session

    Session Tab

    • [x] Verify that session and browser tabs both show the title with login and node name
    • [x] Verify that terminal resize works
      • Install midnight commander on the node you ssh into: $ sudo apt-get install mc
      • Run the program: $ mc
      • Resize the terminal to see if panels resize with it
    • [x] Verify that session tab shows/updates number of participants when a new user joins the session
    • [x] Verify that tab automatically closes on "$ exit" command
    • [x] Verify that SCP Upload works
    • [x] Verify that SCP Upload handles invalid paths and network errors
    • [x] Verify that SCP Download works
    • [x] Verify that SCP Download handles invalid paths and network errors

    Session Player

    • [x] Verify that it can replay a session
    • [x] Verify that when playing, scroller auto scrolls to bottom most content
    • [x] Verify when resizing player to a small screen, scroller appears and is working
    • [x] Verify that error message is displayed (enter an invalid SID in the URL)

    Invite and Reset Form

    • [x] Verify that input validates
    • [x] Verify that invite works with 2FA disabled
    • [x] Verify that invite works with OTP enabled
    • [x] Verify that invite works with U2F enabled
    • [x] Verify that invite works with WebAuthn enabled
    • [x] Verify that error message is shown if an invite is expired/invalid

    Login Form and Change Password

    • [x] Verify that input validates
    • [x] Verify that login works with 2FA disabled
    • [x] Verify that changing passwords works for 2FA disabled
    • [x] Verify that login works with OTP enabled
    • [x] Verify that changing passwords works for OTP enabled
    • [x] Verify that login works with U2F enabled
    • [x] Verify that changing passwords works for U2F enabled
    • [x] Verify that login works with WebAuthn enabled
    • [x] Verify that changing passwords works for WebAuthn enabled
    • [x] Verify that login works for Github/SAML/OIDC
    • [x] Verify that redirect to original URL works after successful login
    • [x] Verify that account is locked after several unsuccessful login attempts
    • [x] Verify that account is locked after several unsuccessful change password attempts

    Multi-factor Authentication (mfa)

    Create/modify teleport.yaml and set the following authentication settings under auth_service

    authentication:
      type: local
      second_factor: optional
      require_session_mfa: yes
      webauthn:
        rp_id: example.com
    

    MFA invite, login, password reset, change password

    • [x] Verify that during invite/reset, the second factor list shows all auth types: none, hardware key, and authenticator app
    • [x] Verify registration works with all option types
    • [x] Verify login with all option types
    • [x] Verify changing password with all option types
    • [x] Change second_factor type to on and verify that mfa is required (no option none in dropdown)

    MFA require auth

    Go to Account Settings > Two-Factor Devices and register a new device

    Using the same user as above:

    • [x] Verify logging in with registered WebAuthn key works
    • [x] Verify connecting to an ssh node prompts you to tap your registered WebAuthn key
    • [ ] Verify in the web terminal, you can scp upload/download files

    MFA Management

    • [x] Verify adding first device works without requiring re-authentication
    • [x] Verify re-authenticating with a WebAuthn device works
    • [x] Verify re-authenticating with a U2F device works
    • [x] Verify re-authenticating with a OTP device works
    • [x] Verify adding a WebAuthn device works
    • [x] Verify adding a U2F device works
    • [x] Verify adding an OTP device works
    • [x] Verify removing a device works
    • [x] Verify second_factor set to off disables adding devices

    Passwordless

    • [x] Pure passwordless registrations and resets are possible
    • [x] Verify adding a passwordless device (WebAuthn)
    • [x] Verify passwordless logins

    Cloud

    From your cloud staging account, change the field teleportVersion to the test version.

    $ kubectl -n <namespace> edit tenant
    

    Recovery Code Management

    • [x] Verify generating recovery codes for local accounts with email usernames works
    • [x] Verify local accounts with non-email usernames are not able to generate recovery codes
    • [x] Verify SSO accounts are not able to generate recovery codes

    Invite/Reset

    • [x] Verify that email usernames render the recovery codes dialog
    • [x] Verify that non-email usernames do not render the recovery codes dialog

    Recovery Flow: Add new mfa device

    • [x] Verify recovering (adding) a new hardware key device with password
    • [x] Verify recovering (adding) a new otp device with password
    • [x] Verify viewing and deleting any old device (but not the one just added)
    • [x] Verify new recovery codes are rendered at the end of flow

    Recovery Flow: Change password

    • [x] Verify recovering password with any mfa device
    • [x] Verify new recovery codes are rendered at the end of flow

    Recovery Email

    • [x] Verify receiving email for link to start recovery
    • [x] Verify receiving email for successfully recovering
    • [x] Verify email link is invalid after successful recovery
    • [x] Verify receiving email for locked account when max attempts reached

    RBAC

    Create a role, with no allow.rules defined:

    kind: role
    metadata:
      name: rbac
    spec:
      allow:
        app_labels:
          '*': '*'
        logins:
        - root
        node_labels:
          '*': '*'
      options:
        max_session_ttl: 8h0m0s
    version: v3
    
    • [x] Verify that a user has access only to: "Servers", "Applications", "Databases", "Kubernetes", "Active Sessions", "Access Requests" and "Manage Clusters"
    • [x] Verify there is no Add Server, Application, Databases, Kubernetes button in each respective view
    • [x] Verify only Servers, Apps, Databases, and Kubernetes are listed under options button in Manage Clusters

    Note: User has read/create access_request access to their own requests, despite resource settings

    Add the following under spec.allow.rules to enable read access to the audit log:

      - resources:
          - event
          verbs:
          - list
    
    • [x] Verify that the Audit Log and Session Recordings is accessible
    • [x] Verify that playing a recorded session is denied

    Add the following to enable read access to recorded sessions

      - resources:
          - session
          verbs:
          - read
    
    • [x] Verify that a user can re-play a session (session.end)

    Add the following to enable read access to the roles

      - resources:
          - role
          verbs:
          - list
          - read
    
    • [x] Verify that a user can see the roles
    • [x] Verify that a user cannot create/delete/update a role

    Add the following to enable read access to the auth connectors

      - resources:
          - auth_connector
          verbs:
          - list
          - read
    
    • [x] Verify that a user can see the list of auth connectors.
    • [ ] Verify that a user cannot create/delete/update the connectors

    Add the following to enable read access to users

      - resources:
          - user
          verbs:
          - list
          - read
    
    • [x] Verify that a user can access the "Users" screen
    • [x] Verify that a user cannot reset password and create/delete/update a user

    Add the following to enable read access to trusted clusters

      - resources:
          - trusted_cluster
          verbs:
          - list
          - read
    
    • [x] Verify that a user can access the "Trust" screen
    • [x] Verify that a user cannot create/delete/update a trusted cluster.

    Performance/Soak Test @rosstimothy @espadolini

    Using the tsh bench tool, perform the soak tests and benchmark tests on the following configurations:

    • Cluster with 10K nodes in normal (non-IOT) node mode with ETCD

    • Cluster with 10K nodes in normal (non-IOT) mode with DynamoDB

    • Cluster with 1K IOT nodes with ETCD

    • Cluster with 1K IOT nodes with DynamoDB

    • Cluster with 500 trusted clusters with ETCD

    • Cluster with 500 trusted clusters with DynamoDB

    Soak Tests

    Run 4hour soak test with a mix of interactive/non-interactive sessions:

    tsh bench --duration=4h user@teleport-monster-6757d7b487-x226b ls
    tsh bench -i --duration=4h user@teleport-monster-6757d7b487-x226b ps uax
    

    Observe Prometheus metrics for goroutines, open files, RAM, CPU, and timers, and make sure there are no leaks

    • [ ] Verify that prometheus metrics are accurate.

    Breaking load tests

    Load the system with tsh bench to capacity and publish the maximum number of concurrent sessions with interactive and non-interactive tsh bench loads.

    Teleport with Cloud Providers

    AWS @lxea

    • [x] Deploy Teleport to AWS. Using DynamoDB & S3
    • [x] Deploy Teleport Enterprise to AWS. Using HA Setup https://gravitational.com/teleport/docs/aws-terraform-guide/

    GCP @EdwardDowling

    • [x] Deploy Teleport to GCP. Using Cloud Firestore & Cloud Storage
    • [x] Deploy Teleport to GKE. Google Kubernetes engine.
    • [x] Deploy Teleport Enterprise to GCP.

    IBM @r0mant

    • [x] Deploy Teleport to IBM Cloud. Using IBM Database for etcd & IBM Object Store
    • [x] Deploy Teleport to IBM Cloud Kubernetes.
    • [x] Deploy Teleport Enterprise to IBM Cloud.

    Application Access @strideynet

    • [x] Run an application within local cluster.
      • [x] Verify the debug application debug_app: true works.
      • [x] Verify an application can be configured with command line flags.
      • [x] Verify an application can be configured from file configuration.
      • [x] Verify that applications are available at the auto-generated address name.rootProxyPublicAddr as well as at publicAddr.
    • [x] Run an application within a trusted cluster.
      • [x] Verify that applications are available at auto-generated addresses name.rootProxyPublicAddr.
    • [ ] Verify Audit Records.
      • [x] app.session.start and app.session.chunk events are created in the Audit Log.
      • [x] app.session.chunk points to a 5 minute session archive with multiple app.session.request events inside.
      • [ ] tsh play <chunk-id> can fetch and print a session chunk archive.
    • [x] Verify JWT using verify-jwt.go.
    • [x] Verify RBAC.
    • [x] Verify CLI access with tsh app login.
    • [x] Verify AWS console access.
      • [x] Can log into AWS web console through the web UI.
      • [x] Can interact with AWS using tsh aws commands.
    • [x] Verify dynamic registration.
      • [x] Can register a new app using tctl create.
      • [x] Can update registered app using tctl create -f.
      • [x] Can delete registered app using tctl rm.
    • [x] Test Applications screen in the web UI (tab is located on left side nav on dashboard):
      • [x] Verify that all apps registered are shown
      • [x] Verify that clicking on the app icon takes you to another tab
      • [x] Verify using the bash command produced from Add Application dialogue works (refresh app screen to see it registered)

    Database Access @smallinsky

    • [x] Connect to a database within a local cluster.
      • [x] Self-hosted Postgres @gabrielcorado
      • [x] Self-hosted MySQL @GavinFrazar
      • [x] Self-hosted MariaDB @greedy52
      • [x] Self-hosted MongoDB @Tener
      • [x] Self-hosted CockroachDB @gabrielcorado
      • [x] AWS Aurora Postgres @greedy52
      • [x] AWS Aurora MySQL @greedy52
      • [x] AWS Redshift @greedy52
      • [x] AWS ElastiCache @gabrielcorado
      • [x] AWS MemoryDB @greedy52
      • [x] GCP Cloud SQL Postgres @smallinsky
      • [x] GCP Cloud SQL MySQL @smallinsky
    • [x] Connect to a database within a remote cluster via a trusted cluster.
      • [x] Self-hosted Postgres @gabrielcorado
      • [x] Self-hosted MySQL @GavinFrazar
      • [x] Self-hosted MariaDB @greedy52
      • [x] Self-hosted MongoDB @Tener
      • [x] Self-hosted CockroachDB @gabrielcorado
      • [x] AWS Aurora Postgres @greedy52
      • [x] AWS Aurora MySQL @greedy52
      • [x] AWS Redshift @greedy52
      • [x] AWS ElastiCache @greedy52
      • [x] AWS MemoryDB @greedy52
      • [x] GCP Cloud SQL Postgres @smallinsky
      • [x] GCP Cloud SQL MySQL @smallinsky
    • [x] Verify audit events @Tener
      • [x] db.session.start is emitted when you connect.
      • [x] db.session.end is emitted when you disconnect.
      • [x] db.session.query is emitted when you execute a SQL query.
    • [x] Verify RBAC @smallinsky
      • [x] tsh db ls shows only databases matching role's db_labels.
      • [x] Can only connect as users from db_users.
      • [x] (Postgres only) Can only connect to databases from db_names.
        • [x] db.session.start is emitted when connection attempt is denied.
      • [x] (MongoDB only) Can only execute commands in databases from db_names.
        • [x] db.session.query is emitted when command fails due to permissions.
      • [x] Can configure per-session MFA.
        • [x] MFA tap is required on each tsh db connect.
    • [x] Verify dynamic registration @Tener
      • [x] Can register a new database using tctl create.
      • [x] Can update registered database using tctl create -f.
      • [x] Can delete registered database using tctl rm.
    • [x] Verify discovery @greedy52
      • [x] Can detect and register RDS instances.
      • [x] Can detect and register Aurora clusters, and their reader and custom endpoints.
      • [x] Can detect and register Redshift clusters.
      • [x] Can detect and register ElastiCache Redis clusters.
    • [x] Test Databases screen in the web UI (tab is located on left side nav on dashboard): @gabrielcorado
      • [x] Verify that all dbs registered are shown with correct name, description, type, and labels
      • [x] Verify that clicking on a row's Connect button renders a dialogue with manual instructions, with the Step 2 login value matching the row's Name column
      • [x] Verify searching for all columns in the search bar works
      • [x] Verify you can sort by all columns except labels

    TLS Routing @smallinsky

    • [x] Verify that teleport proxy v2 configuration starts only a single listener.
      version: v2
      teleport:
        proxy_service:
          enabled: "yes"
          public_addr: ['root.example.com']
          web_listen_addr: 0.0.0.0:3080
      
    • [x] Run Teleport Proxy in multiplex mode auth_service.proxy_listener_mode: "multiplex"
      • [x] Trusted cluster
        • [x] Setup trusted clusters using single port setup web_proxy_addr == tunnel_addr
        kind: trusted_cluster
        spec:
          ...
          web_proxy_addr: root.example.com:443
          tunnel_addr: root.example.com:443
          ...
        
    • [x] Database Access
      • [x] Verify that tsh db connect works through proxy running in multiplex mode
        • [x] Postgres
        • [x] MySQL
        • [x] MariaDB
        • [x] MongoDB
        • [x] CockroachDB
      • [x] Verify connecting to a database through TLS ALPN SNI local proxy tsh db proxy with a GUI client.
    • [x] Application Access
      • [x] Verify app access through proxy running in multiplex mode
    • [x] SSH Access
      • [x] Connect to an OpenSSH server through a local ssh proxy: ssh -o "ForwardAgent yes" -o "ProxyCommand tsh proxy ssh" [email protected]
      • [x] Connect to an OpenSSH server on a leaf cluster through a local ssh proxy: ssh -o "ForwardAgent yes" -o "ProxyCommand tsh proxy ssh --user=%r --cluster=leaf-cluster %h:%p" [email protected]
      • [x] Verify tsh ssh access through proxy running in multiplex mode
    • [x] Kubernetes access: @GavinFrazar
      • [x] Verify kubernetes access through proxy running in multiplex mode

    Desktop Access

    Basic Sessions (@LKozlowski)

    • Direct mode (set listen_addr):
      • [x] Can connect to desktop defined in static hosts section.
      • [x] Can connect to desktop discovered via LDAP
    • IoT mode (reverse tunnel through proxy):
      • [x] Can connect to desktop defined in static hosts section.
      • [x] Can connect to desktop discovered via LDAP
    • [x] Connect multiple windows_desktop_services to the same Teleport cluster, verify that connections to desktops on different AD domains works. (Attempt to connect several times to verify that you are routed to the correct windows_desktop_service)

    User Input (@ibeckermayer)

    • Verify user input

      • [x] Download Keyboard Key Info and verify all keys are processed correctly in each supported browser. Known issues: F11 cannot be captured by the browser without special configuration on MacOS.
      • [x] Left click and right click register as Windows clicks. (Right click on the desktop should show a Windows menu, not a browser context menu)
      • [x] Vertical and horizontal scroll work. Horizontal Scroll Test
    • Locking and access (@ibeckermayer)

      • [x] Verify that placing a user lock terminates an active desktop session.
      • [x] Verify that placing a desktop lock terminates an active desktop session.
      • [x] Verify that placing a role lock terminates an active desktop session.
      • [x] Verify that connecting to a locked desktop fails.
      • [x] Set client_idle_timeout to a small value and verify that idle sessions are terminated (the session should end and an audit event will confirm it was due to idle connection)
    • Labeling (@LKozlowski)

      • [x] All desktops have teleport.dev/origin label.
      • [x] Dynamic desktops have additional teleport.dev labels for OS, OS Version, DNS hostname, and OU.
      • [x] Regexp-based host labeling applies across all desktops, regardless of origin.
      • [x] LDAP attribute labeling functions correctly
    • RBAC (@zmb3)

      • [x] RBAC denies access to a Windows desktop due to labels
      • [x] RBAC denies access to a Windows desktop with the wrong OS-login.
    • Clipboard Support (@zmb3)

      • When a user has a role with clipboard sharing enabled and is using a chromium based browser
        • [x] Going to a desktop when clipboard permissions are in "Ask" mode (aka "prompt") causes the browser to show a prompt while the UI shows a spinner
        • [x] X-ing out of the prompt (causing the clipboard permission to remain in "Ask" mode) causes the prompt to show up again
        • [x] Denying clipboard permissions brings up a relevant error alert (with "Clipboard Sharing Disabled" in the top bar)
        • [x] Allowing clipboard permissions allows you to see the desktop session, with "Clipboard Sharing Enabled" highlighted in the top bar
        • [x] Copy text from local workstation, paste into remote desktop
        • [x] Copy text from remote desktop, paste into local workstation
        • [x] Copying unicode text also works in both directions
      • When a user has a role with clipboard sharing enabled and is not using a chromium based browser
        • [x] The UI shows a relevant alert and "Clipboard Sharing Disabled" is highlighted in the top bar
      • When a user has a role with clipboard sharing disabled and is using a chromium and non-chromium based browser (confirm both)
        • [x] The live session should show disabled in the top bar and copy/paste should not work between your workstation and the remote desktop.
    • Per-Session MFA (try webauthn on each of Chrome, Safari, and Firefox) @zmb3

      • [x] Attempting to start a session with no keys registered shows an error message
      • [x] Attempting to start a session with a webauthn registered pops up the "Verify Your Identity" dialog
        • [x] Hitting "Cancel" shows an error message
        • [x] Hitting "Verify" causes your browser to prompt you for MFA
        • [x] Cancelling that browser MFA prompt shows an error
        • [x] Successful MFA verification allows you to connect
    • Session Recording (@LKozlowski)

      • [x] Verify sessions are not recorded if all of a user's roles disable recording
      • [x] Verify sync recording (mode: node-sync or mode: proxy-sync)
      • [x] Verify async recording (mode: node or mode: proxy)
      • [x] Sessions show up in session recordings UI with desktop icon
      • [x] Sessions can be played back, including play/pause functionality
      • [x] A session that ends with a TDP error message can be played back, ends by displaying the error message, and the progress bar progresses to the end.
      • [x] Attempting to play back a session that doesn't exist (i.e. by entering a non-existing session id in the url) shows a relevant error message.
      • [x] RBAC for sessions: ensure users can only see their own recordings when using the RBAC rule from our docs
    • Audit Events (check these after performing the above tests) (@ibeckermayer)

      • [x] windows.desktop.session.start (TDP00I) emitted on start
      • [x] windows.desktop.session.start (TDP00W) emitted when session fails to start (due to RBAC, for example)
      • [x] windows.desktop.session.end (TDP01I) emitted on end
      • [x] desktop.clipboard.send (TDP02I) emitted for local copy -> remote paste
      • [x] desktop.clipboard.receive (TDP03I) emitted for remote copy -> local paste

    Binaries compatibility @fheinecke

    • Verify that teleport/tsh/tctl/tbot run on:
      • [x] CentOS 7
      • [x] CentOS 8
      • [x] Ubuntu 18.04
      • [x] Ubuntu 20.04
      • [ ] Debian 9
    • Verify tsh runs on:
      • [x] Windows 10
      • [ ] MacOS

    Machine ID @timothyb89

    SSH

    With a default Teleport instance configured with a SSH node:

    • [x] Verify you are able to create a new bot user with tctl bots add robot --roles=access. Follow the instructions provided in the output to start tbot
    • [x] Verify you are able to connect to the SSH node using openssh with the generated ssh_config in the destination directory (see the sketch below)
    • [x] Verify that after the renewal period (default 20m, but this can be reduced via configuration), that newly generated certificates are placed in the destination directory
    • [x] Verify that sending both SIGUSR1 and SIGHUP to a running tbot process causes a renewal and new certificates to be generated

    Ensure the above tests are completed for both:

    • [x] Directly connecting to the auth server
    • [x] Connecting to the auth server via the proxy reverse tunnel
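
    A quick way to exercise the SSH checks above, assuming /opt/machine-id is the tbot destination directory and node.example.com is the registered SSH node (both placeholders):

      # Connect through the generated OpenSSH config.
      $ ssh -F /opt/machine-id/ssh_config root@node.example.com

      # Force an immediate renewal instead of waiting for the renewal period.
      $ pkill -USR1 tbot
      $ pkill -HUP tbot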

    DB Access

    With a default Postgres DB instance, a Teleport instance configured with DB access and a bot user configured:

    • [x] Verify you are able to connect to and interact with a database using tbot db while tbot start is running

    Teleport Connect @ravicious @gzdunek @avatus

    • Auth methods @ravicious
      • Verify that the app supports clusters using different auth settings (auth_service.authentication in the cluster config):
        • [x] type: local, second_factor: "off"
        • [x] type: local, second_factor: "otp"
        • [x] type: local, second_factor: "webauthn"
        • [x] type: local, second_factor: "optional", log in without MFA
        • [x] type: local, second_factor: "optional", log in with OTP
        • [x] type: local, second_factor: "optional", log in with hardware key
        • [x] type: local, second_factor: "on", log in with OTP
        • [x] type: local, second_factor: "on", log in with hardware key
        • Authentication connectors: @ravicious
          • For these, you might want to use clusters that are deployed on the web (specified in parentheses), or set up the connectors on a local enterprise cluster following the guide from our wiki.
          • [x] GitHub (asteroid)
            • [x] local login on a GitHub-enabled cluster
          • [x] SAML (platform cluster)
          • [x] OIDC (e-demo)
    • Shell @gzdunek
      • [x] Verify that the shell is pinned to the correct cluster (for root clusters and leaf clusters).
        • That is, opening new shell sessions in other workspaces or other clusters within the same workspace should have no impact on the original shell session.
      • [x] Verify that the local shell is opened with correct env vars.
        • TELEPORT_PROXY and TELEPORT_CLUSTER should pin the session to the correct cluster.
        • TELEPORT_HOME should point to ~/Library/Application Support/Teleport Connect/tsh.
        • PATH should include /Applications/Teleport Connect.app/Contents/Resources/bin.
      • [x] Verify that the working directory in the tab title is updated when you change the directory (only for local terminals).
      • [x] Verify that terminal resize works for both local and remote shells.
        • Install midnight commander on the node you ssh into: $ sudo apt-get install mc
        • Run the program: $ mc
        • Resize Teleport Connect to see if the panels resize with it
      • [x] Verify that the tab automatically closes on $ exit command.
    • State restoration @ravicious
      • [x] Verify that the app asks about restoring the previous tabs when launched and restores them properly.
      • [x] Verify that the app opens with the cluster that was active when you closed the app.
      • [x] Verify that the app remembers size & position after restart.
      • [x] Verify that reopening a cluster that has no workspace assigned works.
      • [x] Verify that reopening the app after removing ~/Library/Application Support/Teleport Connect/tsh doesn't crash the app.
      • [x] Verify that reopening the app after removing ~/Library/Application Support/Teleport Connect/app_state.json but not the tsh dir doesn't crash the app.
      • [x] Verify that logging out of a cluster and then logging in to the same cluster doesn't remember previous tabs (they should be cleared on logout).
    • Connections picker @ravicious
      • [x] Verify that the connections picker shows new connections when ssh & db tabs are opened.
      • [x] Check if those connections are available after the app restart.
      • [x] Check that those connections are removed after you log out of the root cluster that they belong to.
      • [x] Verify that reopening a db connection from the connections picker remembers last used port & database name.
    • Cluster resources (servers/databases) @gzdunek
      • [x] Verify that the app shows the same resources as the Web UI.
      • [x] Verify that search is working for the resources lists.
      • [x] Verify that you can connect to these resources.
      • [x] Verify that clicking "Connect" shows available logins and db usernames.
        • Logins and db usernames are taken from the role, under spec.allow.logins and spec.allow.db_users.
      • [x] Repeat the above steps for resources in leaf clusters. @ravicious
      • [x] Verify that tabs have correct titles set.
      • [x] Verify that the port number remains the same for a db connection between app restarts.
      • [x] Create a db connection, close the app, run tsh proxy db with the same port, start the app. Verify that the app doesn't crash and the db connection tab shows you the error (address in use) and offers a way to retry creating the connection.
    • Shortcuts @gzdunek
      • [x] Verify that switching between tabs works on Cmd+[1...9].
      • [x] Verify that other shortcuts are shown after you close all tabs.
      • [x] Verify that the other shortcuts work and each of them is shown on hover on relevant UI elements.
    • Workspaces @ravicious
      • [x] Verify that logging in to a new cluster adds it to the identity switcher and switches to the workspace of that cluster automatically.
      • [x] Verify that the state of the current workspace is preserved when you change the workspace (by switching to another cluster) and return to the previous workspace.
    • Command bar & autocomplete @gzdunek
      • Do the steps for the root cluster, then switch to a leaf cluster and repeat them.
      • [x] Verify that the autocomplete for tsh ssh filters SSH logins and autocompletes them.
      • [x] Verify that the autocomplete for tsh ssh filters SSH hosts by name and label and autocompletes them.
      • [x] Verify that launching an invalid tsh ssh command shows the error in a new tab.
      • [x] Verify that launching a valid tsh ssh command opens a new tab with the session opened.
      • [x] Verify that the autocomplete for tsh proxy db filters databases by name and label and autocompletes them.
      • [x] Verify that launching a tsh proxy db command opens a new local shell with the command running.
      • [x] Verify that the autocomplete for tsh ssh doesn't break when you cut/paste commands in various points.
      • [x] Verify that manually typing out what the autocomplete would suggest doesn't break the command bar.
      • [x] Verify that launching any other command that's not supported by the autocomplete opens a new local shell with that command running.
    • Resilience when resources become unavailable @gzdunek
      • For each scenario, create at least one tab for each available kind (minus k8s for now).
      • For each scenario, first do the external action, then click "Sync" on the relevant cluster tab. Verify that no unrecoverable error was raised. Then restart the app and verify that it was restarted gracefully (no unrecoverable error on restart, the user can continue using the app).
        • [x] Stop the root cluster.
        • [x] Stop a leaf cluster.
        • [x] Disconnect your device from the internet.
    • Refreshing certs @gzdunek
      • To test scenarios from this section, create a user with a role that has TTL of 1m (spec.options.max_session_ttl).
      • Log in, create a db connection and run the CLI command; wait for the cert to expire, click "Sync" on the cluster tab.
        • Verify that after successfully logging in:
          • [x] the cluster info is synced
          • [x] the connection in the running CLI db client wasn't dropped; try executing select now();, the client should be able to automatically reinstantiate the connection.
          • [x] the database proxy is able to handle new connections; click "Run" in the db tab and see if it connects without problems. You might need to resync the cluster again in case they managed to expire.
        • [x] Verify that closing the login modal without logging in shows an error related to syncing the cluster.
      • Log in; wait for the cert to expire, click "Connect" next to a db in the cluster tab.
        • [x] Verify that clicking "Connect" and then navigating to a different tab before the request completes doesn't show the login modal and instead immediately shows the error.
        • For this one, you might want to use a server in our Cloud if the introduced latency is high enough. Perhaps enabling throttling in dev tools can help too.
      • [x] Log in; create two db connections, then remove access to one of the db servers for that user; wait for the cert to expire, click "Sync", verify that the db tab with no access shows an appropriate error and that the other db tab still handles old and new connections.
    • [x] Verify that logs are collected for all processes (main, renderer, shared, tshd) under ~/Library/Application\ Support/Teleport\ Connect/logs. @ravicious
    • [x] Verify that the password from the login form is not saved in the renderer log. @ravicious
    • [x] Log in to a cluster, then log out and log in again as a different user. Verify that the app works properly after that. @gzdunek

    Host users creation @jakule

    Host users creation docs Host users creation RFD

    • Verify host users creation functionality
      • [x] non-existing users are created automatically
      • [x] users are added to groups
        • [x] non-existing configured groups are created
        • [x] created users are added to the teleport-system group
      • [x] users are cleaned up after their session ends
        • [ ] cleanup occurs if a program was left running after session ends
      • [x] sudoers file creation is successful
        • [ ] Invalid sudoers files are not created
      • [ ] existing host users are not modified
      • [x] setting disable_create_host_user: true stops user creation from occurring
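
    A role sketch used for these checks; create_host_user, host_groups, and host_sudoers match the Host users creation docs at the time of writing, but the role name, login, group, and sudoers entries are placeholders:

      kind: role
      version: v5
      metadata:
        name: auto-host-users
      spec:
        options:
          create_host_user: true
        allow:
          logins: ["alice"]
          host_groups: ["developers"]
          host_sudoers: ["ALL=(ALL) NOPASSWD: ALL"]

    The last check above is the node-side opt-out: setting disable_create_host_user: true in the node's ssh_service config should stop user creation regardless of the role.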

    CA rotations @espadolini

    • Verify the CA rotation functionality itself (by checking in the backend or with tctl get cert_authority)
      • [x] standby phase: only active_keys, no additional_trusted_keys
      • [x] init phase: active_keys and additional_trusted_keys
      • [x] update_clients and update_servers phases: the certs from the init phase are swapped
      • [x] standby phase: only the new certs remain in active_keys, nothing in additional_trusted_keys
      • [x] rollback phase (second pass, after completing a regular rotation): same content as in the init phase
      • [x] standby phase after rollback: same content as in the previous standby phase
    • Verify functionality in all phases (clients might have to log in again in lieu of waiting for credentials to expire between phases)
      • [x] SSH session in tsh from a previous phase
      • [x] SSH session in web UI from a previous phase
      • [x] New SSH session with tsh
      • [x] New SSH session with web UI
      • [x] New SSH session in a child cluster on the same major version
      • [ ] New SSH session in a child cluster on the previous major version - blocked on #13793
      • [x] New SSH session from a parent cluster
      • [x] Application access through a browser
      • [x] Application access through curl with tsh app login
      • [x] kubectl get po after tsh kube login
      • [ ] Database access (no configuration change should be necessary if the database CA isn't rotated, other Teleport functionality should not be affected if only the database CA is rotated)
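
    The phase walkthrough above can be driven manually with tctl; a sketch, assuming the host CA (other CA types are rotated the same way):

      # Inspect the current keys and rotation state.
      $ tctl get cert_authority

      # Step through the phases manually.
      $ tctl auth rotate --manual --type=host --phase=init
      $ tctl auth rotate --manual --type=host --phase=update_clients
      $ tctl auth rotate --manual --type=host --phase=update_servers
      $ tctl auth rotate --manual --type=host --phase=standby

      # Roll back instead of completing (the second pass in the checks above).
      $ tctl auth rotate --manual --type=host --phase=rollback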

    IP-based validation

    SSH @probakowski

    • Verify IP-based validation works for SSH
      • [x] pin_source_ip: true option can be added in role definition (see the role sketch after this list)
      • [x] tsh ssh works when invoked from the same machine/IP that was used for logging in
      • [x] tsh ssh prompts for relogin when invoked from a different machine (copy certs after login)
      • [ ] connecting to an sshd server works as above in both cases
      • [ ] ssh works as above in both cases
      • [x] SSH access from WebUI works with IP pinning enabled
    • [x] tsh status -d shows pinned IP
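
    A role sketch for the pin_source_ip checks above (role name and logins are placeholders):

      kind: role
      version: v5
      metadata:
        name: pinned-ip-access
      spec:
        options:
          pin_source_ip: true
        allow:
          logins: ["alice"]
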
  • Idiomatic helm chart for Teleport

    Idiomatic helm chart for Teleport

    Hey, thanks a lot for maintaining the awesome project!

    Here is a (hopefully) idiomatic helm chart for Teleport, with instructions to test it locally using minikube + ngrok.

    I was just trying to run Teleport on Kubernetes for #1986 and thought a more idiomatic helm chart would be helpful as a foundation towards the upcoming 2.7.0 release. I'm sending this PR because I was unable to just wait for it :)

    Please feel free to leave questions, comments, requests, etc.

  • Reverse tunnels for individual nodes?

    Reverse tunnels for individual nodes?

    Are there any plans for support of individual nodes creating reverse tunnels to a proxy server without creating a new cluster? We have a case where we would like to have a single node setup at multiple different sites, but currently it looks like we would need to configure a cluster for each site just for a single node to use a reverse tunnel.

  • 2.0.6 to 2.2 alpha upgrade issue with DynamoDB backend

    2.0.6 to 2.2 alpha upgrade issue with DynamoDB backend

    Originally reported by @ekristen in #896 (in comments at the bottom):

    Well I upgraded to alpha8 and now I cannot add any more nodes. I'm getting "the cluster has no signing keys" errors. There seems to be something with upgrading to a new version using dynamodb that breaks everything.

    Logs:

    level=warning msg="[AUTH] Node \"server-001\" [11fbfa42-17b9-4cfe-a863-65e791663838] can not join: certificate generation error: my-cluster has no signing keys" file="auth/auth.go:464" func="auth.(*AuthServer).GenerateServerKeys"

    I cannot since I've already upgraded. However I was on 2.0.6 and now I am on Teleport v2.2.0-alpha.8 git:v2.1.0-alpha.6-43-g14cf169d-dirty

    I can tell you that I've now seen this happen multiple times across multiple versions. I am using dynamodb as a backend. I attempted to replicate it using dir mode only and a single auth server and was unable to.

    After that I went back to using dynamodb with multiple auth servers, however I only use 1 when registering nodes, so while the other 2 are running the auth service nothing is talking to them.

    Everything seemed great for a while, I was able to add nodes and this bug seemed to not be present anymore until I upgraded to alpha8 and I completely lost the ability to register nodes again.

  • Kubernetes Secret storage for Agent's Identity

    Kubernetes Secret storage for Agent's Identity

    This MR introduces Kubernetes Secret storage for Teleport Kubernetes Agent state.

    When running the Teleport Agent inside a Kubernetes cluster where the TELEPORT_REPLICA_NAME and KUBE_NAMESPACE environment variables are set, the agent will store identities and state in a Kubernetes Secret instead of local storage.

    Otherwise, if those environment variables are not present or the agent is not running in Kubernetes, it persists state in SQLite.
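
    One plausible way for the chart to populate those two variables is the Kubernetes downward API; a sketch of a container env section (the actual teleport-kube-agent chart may wire this differently):

      env:
        - name: TELEPORT_REPLICA_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: KUBE_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace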

    Notes:

    • Fresh installs will enable the Kubernetes Secret Storage by default.
    • If PV storage is configured, the agent will store its identity in a Secret and use the PV for non-identity objects.
    • If the operator is upgrading from an existing release that didn't support Secret storage, the Helm chart will convert the objects to enable Kube Secret storage. If the current object is a Deployment, the Helm chart installs a new StatefulSet side by side and triggers a Helm hook that deletes the Deployment once the StatefulSet pods are healthy. Otherwise, if upgrading from a StatefulSet, the agent reads the credentials stored in the PV and moves them into the Kube Secret.

    Implementation of #12958. Related to #5585.

  • Add support for automatic user provisioning

    Add support for automatic user provisioning

    Depends on:

    • #11077

    Implements the following from the RFD:

    • Allowing automatic creation of users on SSH nodes
    • Automatic deletion of users either periodically or once all their sessions end
    • Automatic group creation for teleport-system and any specified groups that don't already exist
    • Option to disable user creation on a specific node (needs testing & needs to disable scanning for undeleted users)

    Future work:

    • PR to add sudoers once this is more finalized [#12061]
  • Package the Installer for Common OS's

    Package the Installer for Common OS's

    It would make my life easier if the Installer were packaged for various Linux distros, but I am actually after it being added to Homebrew so that it'd be easier for my end users to install. It's something that I can tackle, but I would prefer that it was integrated by you guys so that it doesn't fall out of date if I miss a release cycle.

  • Teleport Operator

    Teleport Operator

    This PR adds a Kubernetes operator for Teleport: Teleport Operator.

    You can check the operator/README.md to test the Operator yourself.

    Note for reviewers:

    There are a lot of file changes, but most of them are auto-generated. Looking at each commit individually should ease the review process.

    Interesting files:

    • crdgen: generates CRDs from the current API
    • sidecar: creates a Teleport Client using a specific User and Role
    • controllers/resources: reconcilers implementation

    There's an extra pipeline in .drone.yaml which will be removed when the PR is approved. It exists to test the steps we added to the current publish pipeline. It also provides a container image we can use to test out the Operator.

    RFD: https://github.com/gravitational/teleport-plugins/blob/master/rfd/0001-kubernetes-manager.md

  • teleport unable to bind to 3025

    teleport unable to bind to 3025

    It seems that when you start teleport it expects the auth_service to be running -

    ip-10-0-0-115 teleport # ./teleport start --roles=node --token=e66bdb3a03946a8f66347941f7196b6f --auth-server=publicip:3025
    dial tcp 10.0.0.115:3025: getsockopt: connection refused
    
  • Kubernetes Helm Guide should provide IAC flow

    Kubernetes Helm Guide should provide IAC flow

    Description

    When users follow https://goteleport.com/docs/deploy-a-cluster/helm-deployments/aws/ they get several wrong impressions:

    Teleport needs to create S3 buckets and DynamoDB tables

    The policies below are written in a way that allows Teleport to create DynamoDB tables and S3 buckets.

    Production admins don't like that, as they don't like infrastructure drift

    https://goteleport.com/docs/deploy-a-cluster/helm-deployments/aws/#dynamodb-iam-policy

    Instead, link Terraform examples and JSON policies referring to existing tables and S3 buckets.

    No IAC way to create users/resources

    The guide above uses kubectl exec instead of Terraform (IaC) or the Kubernetes operator (which is not mentioned anywhere in the guide as far as I can see, which is strange, as it's a K8s guide):

    https://github.com/gravitational/teleport/tree/master/operator#teleport-kubernetes-operator

  • Add `is_flexi_server` to database Azure proto

    Add `is_flexi_server` to database Azure proto

    This PR adds the bool flag for flexi servers. Making this a separate PR from https://github.com/gravitational/teleport/pull/19759 keeps the massive proto diff out of the implementation PR.

    This is a protobuf-only change. The other PR contains the code updates for fileconf and service configuration.

  • [v11] Add redirects to the new Audit Events section (#19553)

    [v11] Add redirects to the new Audit Events section (#19553)

    The changes in #17405 added a section to the docs for guides to exporting audit events, and moved guides from docs/pages/management/guides, but failed to add redirects. This change adds the missing redirects.

  • Skip device authentication based on Ping

    Skip device authentication based on Ping

    AttemptDeviceLogin, which is the main entry point for device authentication, now checks the Ping response and skips the attempt entirely if device trust is disabled.

    The main objective is to avoid a needless roundtrip if the feature is disabled, as one should only pay for what is in use.

    There's actually little consequence in attempting the roundtrip, apart from the added latency on logins, so I've gone with a negative flag ("Disabled" instead of "Enabled"). The negative is less harmful if, for some reason, it's wrongly absent (say, because of some future Ping code branch).

    https://github.com/gravitational/teleport.e/issues/514
