K3s Getting Started Guide - Detailed Tutorial on Running K3s in Docker

What is k3d?

k3d is a small program for running a K3s cluster in Docker. K3s is a lightweight, CNCF-certified Kubernetes distribution and Sandbox project. It is designed for resource-constrained environments and is packaged as a single binary that requires less than 512 MB of RAM. To learn more about K3s, check out our previous articles and videos on Bilibili.

k3d launches multiple K3s nodes in Docker containers on any machine with Docker installed, using Docker images built from the K3s repository. In this way, a physical (or virtual) machine (called Docker Host) can run multiple K3s clusters, each with multiple server and agent nodes.

What can k3d do?

In January 2021, k3d v4.0.0 was released, which includes the following features:

  • Create/stop/start/delete/scale up/scale down K3s clusters (and individual nodes), as shown in the sketch after this list
      • via command-line flags
      • via a configuration file
  • Manage and interact with container image registries that can be used with the cluster
  • Manage the cluster's Kubeconfigs
  • Import images from the local Docker daemon into the container runtime running in the cluster
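
A minimal sketch of what those lifecycle operations look like on the command line (the cluster, node and image names are made up for illustration):

# create, stop, start and delete a cluster
k3d cluster create mycluster --servers 1 --agents 2
k3d cluster stop mycluster
k3d cluster start mycluster
k3d cluster delete mycluster
# "scale up" by adding an extra agent node to a running cluster
k3d node create extra-agent --cluster mycluster --role agent
# fetch the cluster's Kubeconfig
k3d kubeconfig get mycluster
# import an image from the local Docker daemon into the cluster
k3d image import myapp:latest -c mycluster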

Obviously, there are many more ways you can adjust the details of the process.

What is the use of k3d?

The main application scenario of k3d is local development on Kubernetes. Because k3d is lightweight and simple, it adds very little hassle and resource overhead in this scenario. k3d was originally developed to give developers a simple tool for running a lightweight Kubernetes cluster on their development machine, so that they get fast iteration times in a production-like environment (as opposed to, say, running docker-compose locally while running Kubernetes in production).

Over time, k3d has also evolved into an operations tool for testing certain Kubernetes (or specifically K3s) features in an isolated environment. For example, with k3d you can easily create a multi-node cluster, deploy some applications on it, easily stop a node and see how Kubernetes reacts, and also be able to reschedule your application to other nodes.

Additionally, you can use k3d in your continuous integration system to quickly spin up a cluster, deploy your test stack on it, and run integration tests. Once you are finished, you can simply tear down the entire cluster, with no need to worry about proper cleanup and possible leftovers.

We also provide a k3d-dind image (similar to the dream within a dream in the movie Inception, we have a container within a container within a container.) With this you can create a docker-in-docker environment running k3d, which spawns a K3s cluster in Docker. This means that you have only one container (k3d-dind) running on your Docker host, which in turn has the entire K3s/Kubernetes cluster running inside it.

How to use k3d?

1. Install k3d (you can also install kubectl if needed)

Note: This article has version requirements; please use k3d v4.1.1 or later.
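
As a hedged example (the exact installation method may vary by platform), the project ships an install script in its repository; something along these lines is one common way to install it:

# install the latest release via the install script
curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash
# or pin a specific release via the TAG environment variable
curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG=v4.1.1 bash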

2. Try one of the following examples, or use the documentation or CLI help text to find your own way (k3d [command] --help)

The “simple” way

k3d cluster create

This command will create a K3s cluster with two containers: a Kubernetes control-plane node (server) and a load balancer (serverlb) in front of it. It places them both in a dedicated Docker network and exposes the Kubernetes API on a randomly chosen free port on the Docker host. It also creates a named Docker volume in the background in preparation for image imports.

By default, if you do not provide a name parameter, the cluster will be named k3s-default and the containers will appear as k3d-<clustername>-<role>-<#>, so in this example the two containers will appear as k3d-k3s-default-serverlb and k3d-k3s-default-server-0.

k3d waits for everything to be ready, pulls the Kubeconfig from the cluster and merges it with the default Kubeconfig (usually located in $HOME/.kube/config or whatever path the KUBECONFIG environment variable points to).
Don't worry, you can adjust that behavior, too.
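
If you prefer k3d not to touch your default Kubeconfig, a sketch of the relevant flags and commands might look like this (the cluster name assumed here is the default k3s-default):

# do not merge into the default Kubeconfig and do not switch the current context
k3d cluster create --kubeconfig-update-default=false --kubeconfig-switch-context=false
# print the cluster's Kubeconfig whenever you need it
k3d kubeconfig get k3s-default
# or merge it into your default Kubeconfig later
k3d kubeconfig merge k3s-default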

Use kubectl to see what you just created: kubectl get nodes displays the nodes.
k3d also gives you commands to list what you've created: k3d cluster list, k3d node list and k3d registry list.

The “simple but sophisticated” approach

k3d cluster create mycluster --api-port 127.0.0.1:6445 --servers 3 --agents 2 --volume '/home/me/mycode:/code@agent[*]' --port '8080:80@loadbalancer'

This command generates a K3s cluster with six containers:

  • 1 load balancer
  • 3 servers (control-plane nodes)
  • 2 agents (formerly called worker nodes)

With --api-port 127.0.0.1:6445 you tell k3d to map the Kubernetes API port (6443 internally) to port 6445 of 127.0.0.1/localhost. This means that your Kubeconfig will then contain the connection string server: https://127.0.0.1:6445 for this cluster.
This port will be mapped from the load balancer to your host system. From there the requests will be proxied to the server nodes, effectively simulating a production setup where a server node may fail and you want to failover to another server.
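
For illustration, the cluster entry in the merged Kubeconfig might then look roughly like this (the cluster name and the redacted certificate field are placeholders):

clusters:
  - cluster:
      certificate-authority-data: <redacted>
      server: https://127.0.0.1:6445
    name: k3d-mycluster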

--volume /home/me/mycode:/code@agent[*] bind mounts your local directory /home/me/mycode to the path /code inside all agent nodes (the [*] selects all of them). Replace * with an index (here: 0 or 1) to mount it into only one of the nodes.
The specification that tells k3d which node(s) a volume should be mounted into is called a "node filter"; it is also used for other flags, such as the --port flag for port mappings.
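
A few hedged examples of node filters applied to the --volume flag (paths and cluster names are purely illustrative):

# mount into all agent nodes
k3d cluster create mycluster --agents 2 --volume '/home/me/mycode:/code@agent[*]'
# mount into the first agent node only
k3d cluster create mycluster --agents 2 --volume '/home/me/mycode:/code@agent[0]'
# mount into the (first) server node instead
k3d cluster create mycluster --volume '/home/me/mycode:/code@server[0]'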

Likewise, --port '8080:80@loadbalancer' maps port 8080 of the local host to port 80 on the load balancer (serverlb), which can be used to forward HTTP ingress traffic into the cluster. For example, you can deploy a web application into the cluster (Deployment), expose it (Service), and make it reachable through an Ingress on a domain such as myapp.k3d.localhost.

Then (provided everything is set up to resolve that domain to your localhost IP), you can point your browser to http://myapp.k3d.localhost:8080 to access your app. Traffic then flows from your host to the load balancer through the Docker bridge interface. From there, it is proxied to the cluster and delivered to your application Pods via Ingress and Service.
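
As a sketch of the setup described above (resource names, image and domain are illustrative; K3s ships Traefik as its default Ingress controller), the manifests could look roughly like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: nginx:alpine
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  rules:
    - host: myapp.k3d.localhost
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80

After applying these with kubectl apply -f, and once the DNS note below is taken care of, http://myapp.k3d.localhost:8080 should reach the nginx Pod through the load balancer's 8080:80 port mapping.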

NOTE: You will have to set up some mechanism to route myapp.k3d.localhost to your localhost IP (127.0.0.1).
The most common way is an entry for myapp.k3d.localhost pointing to 127.0.0.1 in your /etc/hosts file (C:\Windows\System32\drivers\etc\hosts on Windows).
However, the hosts file does not allow wildcards (*.localhost), so it might become a bit cumbersome after a while; you might want to look at dnsmasq (macOS/UNIX) or Acrylic (Windows) to ease the burden.
Tip: On some systems (at least Linux and openSUSE) you can install the libnss-myhostname package to automatically resolve *.localhost domains to 127.0.0.1, which means you don't have to do it manually anymore. In any case, you will need such a setup if you wish to test via Ingress, where the domain matters.

One thing to note here: if you create more than one server node, k3d passes the --cluster-init flag to K3s, which makes K3s swap its internal datastore from the default SQLite to etcd.
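
A hedged sketch of what that looks like in practice (names are made up; joining additional server nodes afterwards generally relies on the etcd datastore):

# three servers trigger --cluster-init, so the cluster runs on etcd
k3d cluster create ha-cluster --servers 3 --agents 2
# later, join an additional server node to the running cluster
k3d node create extra-server --cluster ha-cluster --role server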

"Configure as Code" approach

Starting with k3d v4.0.0 (released in January 2021), we support using config files to configure everything you previously did with command-line flags (and maybe even more soon). At the time of writing, you can find the JSON schema for validating configuration files in the repo:
https://github.com/rancher/k3d/blob/092f26a4e27eaf9d3a5bc32b249f897f448bc1ce/pkg/config/v1alpha2/schema.json

Example configuration file:

# k3d configuration file, saved e.g. as /home/me/myk3dcluster.yaml
apiVersion: k3d.io/v1alpha2 # this will change in the future as we make everything more stable
kind: Simple # internally, we also have a Cluster config, which is not yet available externally
name: mycluster # name that you want to give to your cluster (will still be prefixed with `k3d-`)
servers: 1 # same as `--servers 1`
agents: 2 # same as `--agents 2`
kubeAPI: # same as `--api-port 127.0.0.1:6445`
  hostIP: "127.0.0.1"
  hostPort: "6445"
ports:
  - port: 8080:80 # same as `--port 8080:80@loadbalancer`
    nodeFilters:
      - loadbalancer
options:
  k3d: # k3d runtime settings
    wait: true # wait for cluster to be usable before returning; same as `--wait` (default: true)
    timeout: "60s" # wait timeout before aborting; same as `--timeout 60s`
  k3s: # options passed on to K3s itself
    extraServerArgs: # additional arguments passed to the `k3s server` command
      - --tls-san=my.host.domain
    extraAgentArgs: [] # additional arguments passed to the `k3s agent` command
  kubeconfig:
    updateDefaultKubeconfig: true # add new cluster to your default Kubeconfig; same as `--kubeconfig-update-default` (default: true)
    switchCurrentContext: true # also set current-context to the new cluster's context; same as `--kubeconfig-switch-context` (default: true)

Assuming we saved this as /home/me/myk3dcluster.yaml, we can use it to configure a new cluster:
k3d cluster create --config /home/me/myk3dcluster.yaml

NOTE: You can still set additional parameters or flags, which will take precedence over (or will be merged with) any parameters you define in the configuration file.
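
For example (the extra flag values here are purely illustrative), you could reuse the config file but override single settings on the command line:

# use the config file, but override the number of agents and the wait timeout
k3d cluster create --config /home/me/myk3dcluster.yaml --agents 3 --timeout 120s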

What else can k3d do?

You can use k3d in many scenarios, for example:

  • Create a cluster together with a k3d-managed container registry (see the sketch after this list)
  • Use the cluster for rapid development with hot code reloading
  • Use k3d together with other development tools such as Tilt or Skaffold
      • both can leverage the image import feature via k3d image import
      • both can also use k3d-managed registries to speed up the development cycle
  • Use k3d in your CI system (we provide a PoC for this: https://github.com/iwilltry42/k3d-demo/blob/main/.drone.yml)
  • Use the community-maintained vscode extension (https://github.com/inercia/vscode-k3d) to integrate k3d into your vscode workflow
  • Use it to set up K3s high availability
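
A hedged sketch of the registry workflow mentioned in the first bullet (registry name, port and image are made up, and the registry hostname must resolve to 127.0.0.1 on your host, e.g. via the hosts-file mechanism described earlier):

# create a k3d-managed registry
k3d registry create myregistry.localhost --port 5111
# create a cluster that is configured to use it
k3d cluster create mycluster --registry-use k3d-myregistry.localhost:5111
# tag and push an image into the registry...
docker tag myapp:latest k3d-myregistry.localhost:5111/myapp:latest
docker push k3d-myregistry.localhost:5111/myapp:latest
# ...or skip the registry and import straight from the local Docker daemon
k3d image import myapp:latest -c mycluster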

You can try all of this out for yourself by using the scripts prepared in this demo repo:
https://github.com/iwilltry42/k3d-demo.

THORSTEN KLEIN
DevOps engineer at trivago, freelance software engineer at SUSE, and maintainer of k3d.
