Kubernetes k8s notes - Problems
kind - ErrImageNeverPull
See:

kind load docker-image hello-python:latest hello-python:latest
kubectl apply -f deployment.yaml # --validate=false

# kubectl get pods -o wide
NAME                            READY   STATUS              RESTARTS   AGE     IP            NODE                 NOMINATED NODE   READINESS GATES
hello-python-67978d6b66-spc7d   0/1     ErrImageNeverPull   0          4h50m   10.244.0.21   kind-control-plane   <none>           <none>
hello-python-67978d6b66-vmv27   0/1     ErrImageNeverPull   0          4h50m   10.244.0.20   kind-control-plane   <none>           <none>
Solution
crictl images is the equivalent of docker images
Diagnosis:

root@vmdeb01:~# docker exec -ti kind-control-plane /bin/bash
root@kind-control-plane:/# crictl images

kubectl delete deployment hello-python
docker build -f Dockerfile -t hello-python:v0.1 .
kind load docker-image hello-python:latest hello-python:v0.1
deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-python-service
spec:
  selector:
    app: hello-python
  ports:
  - protocol: "TCP"
    port: 6000
    targetPort: 5000
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-python
spec:
  selector:
    matchLabels:
      app: hello-python
  replicas: 2
  template:
    metadata:
      labels:
        app: hello-python
    spec:
      containers:
      - name: hello-python
        #image: hello-python:latest
        image: hello-python:v0.1   # <--- Solution
        imagePullPolicy: Never     # <--- Solution
        ports:
        - containerPort: 5000
Problem: Status Error (CrashLoopBackOff)
See:
root@vmdeb01:~# kubectl get pods
NAME                            READY   STATUS             RESTARTS      AGE
hello-python-7954bd58df-7qhj6   0/1     CrashLoopBackOff   4 (22s ago)   117s
hello-python-7954bd58df-v4bmx   0/1     CrashLoopBackOff   4 (36s ago)   117s
- # kubectl logs hello-python-7954bd58df-7qhj6 -c <CONTAINER_NAME>
- kubectl logs hello-python-7954bd58df-7qhj6
python: can't open file '/app/main.py': [Errno 2] No such file or directory
kubectl get pods -l app=myapp-deployment
Problem: kubeadm (2)
kubeadm join vmdeb02:6443 --token ujwgb5.we2fa5y7z1vtzsmd --discovery-token-ca-cert-hash sha256:fdbc20cfef538613e872378e5a0e0305fd5de2caaa04db3d159633086eb30d7c
[preflight] Running pre-flight checks
error execution phase preflight: couldn't validate the identity of the API Server: Get "https://vmdeb02:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": dial tcp 192.168.100.12:6443: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
Port 6443 is not listening on the master.
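Before resetting anything, it can help to confirm the symptom from the worker side. A minimal sketch using bash's /dev/tcp (the check_port helper and the probed address are illustrative, not from the original notes):

```shell
# Probe a TCP port with a 2-second timeout; needs only bash and coreutils' timeout.
check_port() {
  timeout 2 bash -c ">/dev/tcp/$1/$2" 2>/dev/null && echo open || echo closed
}

# Master IP from these notes; should print "open" only once kube-apiserver listens.
check_port 192.168.100.12 6443
```

On the master itself, `ss -tlnp | grep 6443` shows whether kube-apiserver actually holds the port.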
Solution
On the master:
kubeadm reset
kubeadm init --control-plane-endpoint=192.168.100.12:6443 --skip-phases=addon/kube-proxy
Network problem: pod stuck in ContainerCreating
See:

# kubectl get pods -n kube-system | egrep -v "Running"
NAME                       READY   STATUS              RESTARTS       AGE
coredns-76f75df574-4pqxw   0/1     ContainerCreating   0              38m
coredns-76f75df574-lfdvp   0/1     ContainerCreating   0              38m
weave-net-f9p5b            0/2     CrashLoopBackOff    18 (46s ago)   33m
weave-net-qj9zd            1/2     CrashLoopBackOff    18 (80s ago)   33m

root@vmdeb02:~# kubectl describe pod -n kube-system weave-net-f9p5b | tail | grep -v Normal
Warning  BackOff    2m26s                   kubelet  Back-off restarting failed container weave in pod weave-net-f9p5b_kube-system(51e1d7d8-fe7f-4394-9b53-212ac3dbb865)
Warning  Unhealthy  2m10s (x7 over 2m56s)   kubelet  Readiness probe failed: Get "http://127.0.0.1:6784/status": dial tcp 127.0.0.1:6784: connect: connection refused
Forbidden
"Error from server (Forbidden): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended.yaml": deployments.apps "dashboard-metrics-scraper" is forbidden: unable to create new content in namespace kubernetes-dashboard because it is being terminated"
Solution
kubectl -n kubernetes-dashboard delete pod,svc --all
kubectl -n kubernetes-dashboard delete pod,svc --all --force --grace-period 0
Problem: accessing the Kubernetes Dashboard - Error trying to reach service: 'dial tcp 10.244.2.4:8443: i/o timeout'
ssh -L8001:localhost:8001 kub1 sudo kubectl proxy
Error trying to reach service: 'dial tcp 10.244.2.8:8443: i/o timeout'
Kubernetes-dashboard
kubectl --namespace=kubernetes-dashboard port-forward kubernetes-dashboard-b7ffbc8cb-2kwxp 8443
curl 127.0.0.1:8001/api
Solution
ssh -L8443:10.244.2.8:8443 kub3
Then we have the choice:
- Please select the kubeconfig file that you have created to configure access to the cluster. To find out more about how to configure and use kubeconfig file, please refer to the Configure Access to Multiple Clusters section.
- Every Service Account has a Secret with valid Bearer Token that can be used to log in to Dashboard. To find out more about how to configure and use Bearer Tokens, please refer to the Authentication section.
Problem: the EXTERNAL-IP stays "pending"
$ kubectl get services nginx-web-svc
NAME            TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
nginx-web-svc   LoadBalancer   10.105.197.167   <pending>     80:32618/TCP   18h
Most likely there is no load-balancer implementation (e.g. MetalLB on bare metal) to assign the EXTERNAL-IP; an Ingress controller alone does not provide one.
Solution: NodePort
kubectl edit services nginx-web-svc
Change ''type: LoadBalancer'' to ''type: NodePort''
See also type: ClusterIP
Problem: Metrics-server - tls: failed to verify certificate
$ kubectl -n kube-system describe deploy metrics-server | grep ^Selector:
Selector: k8s-app=metrics-server
$ kubectl -n kube-system get pods -l k8s-app=metrics-server
NAME                              READY   STATUS    RESTARTS   AGE
metrics-server-587b667b55-wt67b   1/1     Running   0          11m
kubectl logs metrics-server-587b667b55-wt67b -n kube-system

I0924 21:15:49.105305 1 server.go:191] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
E0924 21:15:57.723402 1 scraper.go:149] "Failed to scrape node" err="Get \"https://192.168.100.21:10250/metrics/resource\": tls: failed to verify certificate: x509: cannot validate certificate for 192.168.100.21 because it doesn't contain any IP SANs" node="vmdeb01.local"
E0924 21:15:57.726365 1 scraper.go:149] "Failed to scrape node" err="Get \"https://192.168.100.22:10250/metrics/resource\": tls: failed to verify certificate: x509: cannot validate certificate for 192.168.100.22 because it doesn't contain any IP SANs" node="vmdeb02
Solution
kubectl patch deployment metrics-server -n kube-system --type 'json' -p '[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}]'
or
kubectl edit deploy metrics-server -n kube-system
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=10250
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls   # <-- Add this line
http://www.mtitek.com/tutorials/kubernetes/install-kubernetes-metrics-server.php
Other
--kubelet-preferred-address-types=InternalIP
Kubernetes k8s notes - Tools
See also:
- Octant / Lens
Arkade
Package manager for installing K8S tools
kubectl
To interact with Kubernetes you need kubectl, the CLI used to run commands against a cluster.
Moreover, as soon as you have to interact with several clusters, with different credentials, across several namespaces, it becomes hard to juggle between them. Several tools exist to make this easier:
Krew
Krew is the plugin manager for kubectl command-line tool.
asdf plugin add krew
asdf install krew latest
asdf set --home krew latest
Kubetail
kubectx / kubens
See:
kubectl config
Lets you change the current context and the current namespace.
apt install kubectx
Migrating to Kubectx and Kubens From KUBECONFIG
# Reference all your config files so kubectl loads them all
$ export KUBECONFIG=~/.kube/cluster-1:~/.kube/cluster-2:~/.kube/cluster-3
# Save a merged version of the current config to a new file
$ kubectl config view --flatten > ~/.kube/.config
kube-ps1
Shows the current context and namespace in the prompt
Helm
Templating & package manager
Helm is a package manager for Kubernetes that makes it easy to deploy components in the form of “Charts”.
Install
sudo snap install helm --classic
Bitnami repositories:
https://github.com/bitnami/charts/tree/main/bitnami
Example with OpenEBS
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm search repo openebs
helm show values openebs/openebs > value-openebs.yml
kubectl create ns openebs
helm install openebs openebs/openebs -f value-openebs.yml -n openebs
helm upgrade openebs openebs/openebs --namespace openebs --set legacy.enable=true --reuse-values
k9s
snap install k9s --devmode
ln -s /snap/k9s/current/bin/k9s /usr/local/bin/
popeye
stern
nerdctl
Equivalent of the docker / podman / crictl / ctr commands, but for containerd
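A guarded sketch (safe to run on a host without nerdctl; containers created by Kubernetes live in containerd's k8s.io namespace):

```shell
# nerdctl uses the same verbs as docker; --namespace selects the containerd namespace.
if command -v nerdctl >/dev/null 2>&1; then
  out=$(nerdctl --namespace k8s.io ps 2>&1 || true)   # containers backing the pods
else
  out="nerdctl not installed on this host"
fi
echo "$out"
```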
LinkerD
(Competitor of Istio)
Istio
(Competitor of LinkerD)
Kubectl (kubernetes-client)
Installing kubectl
sudo apt-get install kubernetes-client
or
VERSION=$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)
curl -LO https://storage.googleapis.com/kubernetes-release/release/$VERSION/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
Config
mkdir -p $HOME/.config/
kubectl completion bash >> $HOME/.config/bash_completion
Conf
${HOME}/.kube/config
or $KUBECONFIG
~/.bashrc
alias k=kubectl
complete -F __start_kubectl k
alias kall="kubectl api-resources --namespaced=true -o name | xargs -i kubectl get {} -o name"
alias kapp="kubectl apply -f"
alias kdel="kubectl delete -f"
alias ksys="kubectl -n kube-system"
kshow() { kubectl get "$@" -o yaml | vim -c "set ft=yaml" -c "g/^  managedFields:/;/^  name/-1d" -c "noh" -; }
alias kports='kubectl get pods -o custom-columns="POD:.metadata.name,PORTS:.spec.containers[*].ports[*].containerPort"'
alias kcc='kubectl config current-context'
alias kg='kubectl get'
alias kga='kubectl get all --all-namespaces'
alias kgp='kubectl get pods'
alias kgs='kubectl get services'
Dashboard
Installation
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
dashboard-admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
kubectl apply -f dashboard-admin.yaml
Access
kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443
# kubectl -n NAMESPACE create token SERVICE_ACCOUNT
kubectl -n kubernetes-dashboard create token admin-user
Source : https://upcloud.com/resources/tutorials/deploy-kubernetes-dashboard
To get the graphs, Metrics-server must be installed.
Storage
Old
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended.yaml
#kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended.yaml
# kubectl proxy
Starting to serve on 127.0.0.1:8001
Rancher WebUI notes
docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:latest
Kubernetes k8s notes - Install
See:
- Microshift (OpenShift OKD)
Architecture
A pool of at least 3 nodes is recommended, to get smooth rolling upgrades during security (patch) updates or minor version updates.
Install
Kind
See:
chmod +x /usr/local/bin/kind
kind create cluster
kubectl cluster-info --context kind-kind
Installing kubectl / Installing Minikube
Installing Docker Desktop / Docker swarm
https://kubernetes.io/docs/tasks/tools/
- Kubectl
- Kind
- Minikube
- Kubeadm
k3s
See also: k0s
K3d (K3S in a container)
kubeinit / kubespray
Deployment of K8S with Ansible
Minikube (limited)
Based on KVM or VirtualBox
See:
# Start a cluster using the kvm2 driver:
minikube start --driver=kvm2
# To make kvm2 the default driver:
minikube config set driver kvm2

minikube addons enable metrics-server
minikube dashboard
Minikube creates a Kube cluster by creating VMs.
minikube start --vm-driver=none
minikube status
minikube ip
ssh docker@<ip>   # Pass: tcuser
$ minikube addons list
|-----------------------------|----------|------------|----------------------------------------|
| ADDON NAME                  | PROFILE  | STATUS     | MAINTAINER                             |
|-----------------------------|----------|------------|----------------------------------------|
| ambassador                  | minikube | disabled   | 3rd party (Ambassador)                 |
| auto-pause                  | minikube | disabled   | minikube                               |
| cloud-spanner               | minikube | disabled   | Google                                 |
| csi-hostpath-driver         | minikube | disabled   | Kubernetes                             |
| dashboard                   | minikube | enabled ✅ | Kubernetes                             |
| default-storageclass        | minikube | enabled ✅ | Kubernetes                             |
| efk                         | minikube | disabled   | 3rd party (Elastic)                    |
| freshpod                    | minikube | disabled   | Google                                 |
| gcp-auth                    | minikube | disabled   | Google                                 |
| gvisor                      | minikube | disabled   | minikube                               |
| headlamp                    | minikube | disabled   | 3rd party (kinvolk.io)                 |
| helm-tiller                 | minikube | disabled   | 3rd party (Helm)                       |
| inaccel                     | minikube | disabled   | 3rd party (InAccel [info@inaccel.com]) |
| ingress                     | minikube | disabled   | Kubernetes                             |
| ingress-dns                 | minikube | disabled   | minikube                               |
| inspektor-gadget            | minikube | disabled   | 3rd party (inspektor-gadget.io)        |
| istio                       | minikube | disabled   | 3rd party (Istio)                      |
| istio-provisioner           | minikube | disabled   | 3rd party (Istio)                      |
| kong                        | minikube | disabled   | 3rd party (Kong HQ)                    |
| kubeflow                    | minikube | disabled   | 3rd party                              |
| kubevirt                    | minikube | disabled   | 3rd party (KubeVirt)                   |
| logviewer                   | minikube | disabled   | 3rd party (unknown)                    |
| metallb                     | minikube | disabled   | 3rd party (MetalLB)                    |
| metrics-server              | minikube | enabled ✅ | Kubernetes                             |
| nvidia-device-plugin        | minikube | disabled   | 3rd party (NVIDIA)                     |
| nvidia-driver-installer     | minikube | disabled   | 3rd party (NVIDIA)                     |
| nvidia-gpu-device-plugin    | minikube | disabled   | 3rd party (NVIDIA)                     |
| olm                         | minikube | disabled   | 3rd party (Operator Framework)         |
| pod-security-policy         | minikube | disabled   | 3rd party (unknown)                    |
| portainer                   | minikube | disabled   | 3rd party (Portainer.io)               |
| registry                    | minikube | disabled   | minikube                               |
| registry-aliases            | minikube | disabled   | 3rd party (unknown)                    |
| registry-creds              | minikube | disabled   | 3rd party (UPMC Enterprises)           |
| storage-provisioner         | minikube | enabled ✅ | minikube                               |
| storage-provisioner-gluster | minikube | disabled   | 3rd party (Gluster)                    |
| storage-provisioner-rancher | minikube | disabled   | 3rd party (Rancher)                    |
| volcano                     | minikube | disabled   | third-party (volcano)                  |
| volumesnapshots             | minikube | disabled   | Kubernetes                             |
| yakd                        | minikube | disabled   | 3rd party (marcnuri.com)               |
|-----------------------------|----------|------------|----------------------------------------|
kubeadm
See: How to Install Kubernetes Cluster on Debian 11-12
Initializes cluster master node
kubeadm init --apiserver-advertise-address $(hostname -i) --pod-network-cidr 10.5.0.0/16
Initialize cluster networking
kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml
Minikube
minikube start
minikube service list
minikube update-context
Microk8s
Ubuntu
Voir :
snap install microk8s --classic
microk8s.kubectl get nodes
microk8s.kubectl get pod --namespace=kube-system
Learning K8S
To install:
- Minikube
- Docker Desktop
- Kubernetes Vanilla
Online courses:
- KodeKloud
Kubernetes k8s notes - Node install
Prerequisites
Prerequisites:
- Unique MAC addresses and /sys/class/dmi/id/product_uuid
- No swap
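kubelet refuses to start by default while swap is enabled. A sketch of the usual fix, demonstrated on a throwaway copy of fstab so it is safe to run anywhere (the sample entries are invented):

```shell
# On a real node: swapoff -a      # disable swap immediately
# Then comment out swap entries in /etc/fstab so it stays off after reboot.
tmp=$(mktemp)
printf '%s\n' \
  'UUID=abcd / ext4 errors=remount-ro 0 1' \
  '/dev/sda2 none swap sw 0 0' > "$tmp"
sed -i '/swap/ s/^/#/' "$tmp"    # prefix swap entries with '#'
result=$(cat "$tmp")
echo "$result"
rm -f "$tmp"
```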
Debian
See:
Generic all Debians
# generic
apt-get install -y yq vim atop tmux sudo
# for apt-get
apt-get install -y apt-transport-https ca-certificates curl gpg
To do:
- Change the hostname
- Update /etc/hosts
- Copy the SSH key (ssh-copy-id)
- IP address: /etc/network/interfaces.d/vlan100 or netplan
apt-get install network-manager
/etc/netplan/00-network-manager.yaml
network:
  version: 2
  renderer: NetworkManager
Netplan with NetworkManager bug -- Solution:
apt-get purge '*netplan*'
systemctl disable --now systemd-networkd.service
chmod 600 /etc/netplan/00-network-manager.yaml
netplan try
netplan apply --debug
netplan apply

nmcli connection add con-name vlan100 ifname enp7s0 type ethernet ip4 192.168.100.21/24
nmcli connection up vlan100

hostnamectl hostname vmdeb01
#echo -e "$(hostname -I | awk '{print $2}')\t\t$(hostname)" >> /etc/hosts
echo "192.168.100.21 vmdeb01.local vmdeb01" >> /etc/hosts
echo "192.168.100.22 vmdeb02.local vmdeb02" >> /etc/hosts

apt-get install openssh-server
adduser admin
echo "admin ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers.d/admin
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system
Check
sysctl net.ipv4.ip_forward
Source : https://kubernetes.io/docs/setup/production-environment/container-runtimes/#cgroup-drivers
Old - Docker conf
groupadd -g 500000 dockremap
groupadd -g 501000 dockremap-user
useradd -u 500000 -g dockremap -s /bin/false dockremap
useradd -u 501000 -g dockremap-user -s /bin/false dockremap-user
echo "dockremap:500000:65536" >> /etc/subuid
echo "dockremap:500000:65536" >> /etc/subgid
useradd is a low level utility for adding users. On Debian, administrators should usually use adduser(8) instead.
/etc/docker/daemon.json
{
"userns-remap": "default"
}
Old
/etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo update-alternatives --set arptables /usr/sbin/arptables-legacy
sudo update-alternatives --set ebtables /usr/sbin/ebtables-legacy
systemctl restart docker.service
Kubernetes k8s notes - Diagnostics
See:
sudo journalctl -f -u kubelet.service
sudo journalctl -u containerd
kubectl cluster-info
kubectl get componentstatus
kubectl get ds -n kube-system
kubectl get deploy -n kube-system
kubectl get nodes
kubectl get pods --field-selector status.phase!=Running -A
kubectl events --types=Warning -A -w
kubectl get events --all-namespaces --sort-by='.metadata.creationTimestamp'
Other
- Use kubectl describe pod … to find the node running your Pod and the container ID (docker:…)
- SSH into the node
- Run docker exec -it -u root CONTAINER_ID /bin/bash
If Metrics-server is installed
kubectl top node
kubectl top pod --sort-by=memory -A
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: could not initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
Ulimits / Process ID limits and reservations
watch 'ps -e -w -o "thcount,cgname" --no-headers | awk "{a[\$2] += \$1} END{for (i in a) print a[i], i}" | sort --numeric-sort --reverse | head --lines=8'
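The awk stage above sums thread counts (thcount) per cgroup. A self-contained demo with canned ps output (service names and counts are invented):

```shell
# Fake `ps -e -w -o "thcount,cgname" --no-headers` output: two sshd lines, one cron line.
sample='12 system.slice/sshd.service
3 system.slice/cron.service
30 system.slice/sshd.service'

# Sum column 1 per value of column 2, then sort by total, biggest first.
result=$(echo "$sample" | awk '{a[$2] += $1} END{for (i in a) print a[i], i}' | sort --numeric-sort --reverse)
echo "$result"   # 12+30=42 threads for sshd, 3 for cron
```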
