Tailscale in Docker with Serve/Funnel Support

Tailscale in Docker without elevated privileges

See the associated blog post: https://asselin.engineer/tailscale-docker

Set TAILSCALE_AUTH_KEY to your own ephemeral auth key: https://login.tailscale.com/admin/settings/keys

docker-compose

The examples detailed below are in the docker-compose folder.

By default, no state is saved: nodes are removed from the tailnet when the tailscale container terminates, so the IP address changes on every run. The stateful-example instead saves the tailscale node state to a Docker volume.

Requirements:

docker
docker-compose

Usage:

export TAILSCALE_AUTH_KEY="your-key"
# Set which project is used:
export PROJECT_DIRECTORY="docker-compose/simple-example"
# Start with a rebuild if necessary:
docker-compose --project-directory=${PROJECT_DIRECTORY} up -d --build
# Show logs and tail (follow):
docker-compose --project-directory=${PROJECT_DIRECTORY} logs --follow
# Stop:
docker-compose --project-directory=${PROJECT_DIRECTORY} down

simple-example

As explained in the blog post, this example uses a dedicated docker-compose service to join the container to the VPN.
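
A minimal sketch of the pattern, with illustrative names and paths (the actual compose file lives in docker-compose/simple-example and may differ):

version: "3"
services:
  tailscale:
    # Joins the tailnet without elevated privileges.
    build: ../../images                           # illustrative path to the image built in images/
    environment:
      TAILSCALE_AUTH_KEY: ${TAILSCALE_AUTH_KEY}   # variable name assumed; match what the image expects
  web:
    # The actual application; it shares the tailscale container's network
    # namespace, so it is reachable over the VPN without publishing ports.
    image: nginx:alpine
    network_mode: service:tailscale

The network_mode: service:tailscale line is what places the application inside the VPN: both containers share a single network stack.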

complex-example

Not truly complex, but more involved than the simple-example: an nginx layer is added that routes to two services running in independent containers at the URLs /service-one and /service-two.
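
One plausible shape for this setup, with illustrative names and placeholder images (see docker-compose/complex-example for the real files):

version: "3"
services:
  tailscale:
    build: ../../images                           # illustrative; see images/
    environment:
      TAILSCALE_AUTH_KEY: ${TAILSCALE_AUTH_KEY}   # variable name assumed
  nginx:
    image: nginx:alpine
    network_mode: service:tailscale               # only nginx is exposed on the tailnet
    # Its nginx.conf (not shown) proxies /service-one and /service-two
    # to the two backend containers below over the compose network.
  service-one:
    image: nginx:alpine                           # placeholder for the first backend
  service-two:
    image: nginx:alpine                           # placeholder for the second backend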

stateful-example

Same as the simple-example, but uses a volume to save state so that the same tailscale hostname and IP address can be reused. Useful in situations where Tailscale MagicDNS cannot be used.
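
The only addition over the simple-example is a named volume for tailscaled's state; a sketch, with the mount path and volume name chosen for illustration:

version: "3"
services:
  tailscale:
    build: ../../images                           # illustrative; see images/
    environment:
      TAILSCALE_AUTH_KEY: ${TAILSCALE_AUTH_KEY}   # variable name assumed
    volumes:
      - tailscale-state:/state                    # node key and state persist here across restarts
  web:
    image: nginx:alpine
    network_mode: service:tailscale
volumes:
  tailscale-state: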

K8S

Same as the simple-example, but on Kubernetes.
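
The idea maps directly onto a pod: containers in one pod already share a network namespace, so a tailscale sidecar next to the web container plays the role of the docker-compose service above. A rough sketch with illustrative names (the actual manifest is k8s/simple-example/deployment.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tailscale-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tailscale-demo
  template:
    metadata:
      labels:
        app: tailscale-demo
    spec:
      containers:
        - name: tailscale                         # sidecar that joins the tailnet
          image: tailscale-example:latest         # placeholder for the image built in images/
          env:
            - name: TAILSCALE_AUTH_KEY            # variable and secret names assumed
              valueFrom:
                secretKeyRef:
                  name: tailscale-auth
                  key: auth-key
        - name: web                               # demo webpage, reachable over the tailnet
          image: nginx:alpine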

Requirements:

kind
kubectl

Usage:

# Create cluster
kind create cluster --name tailscale
kubectl get nodes
# Deploy tailscale and demo webpage:
kubectl apply -f k8s/simple-example/deployment.yaml
# Delete cluster:
kind delete cluster --name tailscale