k8s notes - kind for AWX
See:
See also:
- kinder
Advantages of using AWX:
- Secrets management (native or HashiCorp Vault)
- Log traceability
- Scalability
- RBAC
- REST API
- Preventing concurrent access
- AWX workflows
- Ability to replay the same playbook with exactly the same inputs, guaranteeing reproducibility via an Execution Environment
- Ansible Galaxy integration (and dependency management for collections / roles)
- Control over extra vars (surveys)
- Scheduling (schedules), reducing complexity
- EE: software dependencies, portability, content separation
Source: https://blog.stephane-robert.info/post/ansible-awx-operator-installation-kubernetes/
See also:
How to reference a local volume in Kind (kubernetes in container)
Requires containerd >= 1.7.
Once the cluster is created, it cannot be modified. Plan to be able to restore the configuration after a “delete” and a “create”.
See also:
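Since an existing cluster cannot be edited in place, the delete/recreate cycle can be scripted around a versioned config file. A minimal sketch; the helper name and config file name are illustrative:

```shell
# recreate_cluster: destroy and rebuild a kind cluster from a saved
# config file, since an existing cluster cannot be modified in place.
# Function name and config path are illustrative.
recreate_cluster() {
  local name="$1" config="$2"
  kind delete cluster --name "$name"
  kind create cluster --name "$name" --config "$config"
}

# Example usage: recreate_cluster kind cluster-config.yml
```

Keeping the config file in git makes the "restore after delete/create" step above a one-liner.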
- containers-storage
Kube kind
https://github.com/containerd/nerdctl
https://kind.sigs.k8s.io/docs/user/rootless/
export KIND_EXPERIMENTAL_PROVIDER=nerdctl
# nerdctl / kind needs to know the path to iptables
export PATH=$PATH:/usr/sbin/
Installing kind via asdf
asdf plugin add kind
asdf install kind latest
asdf set --home kind latest
For Podman
If creating a new partition:
mkfs.xfs -n ftype=1 -m reflink=1 /dev/mapper/vg_data-data
For rootless: the home directory must not be mounted noexec/nodev. Source: https://github.com/containers/podman/blob/main/rootless.md
export KIND_EXPERIMENTAL_PROVIDER=podman
systemd-run --scope --user kind create cluster
For rootless with iptables
/etc/modules-load.d/iptables.conf
ip6_tables
ip6table_nat
ip_tables
iptable_nat
If using nftables instead of iptables
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  ipFamily: ipv4
  kubeProxyMode: "nftables"
For nerdctl
KIND_EXPERIMENTAL_PROVIDER=nerdctl kind create cluster
As an example only; ideally extraPortMappings and extraMounts entries should be added as well.
Running as unit: run-p8566-i8567.scope; invocation ID: 066b7253045348e79515baad90cd38ad
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.34.0) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
Install the NGINX Ingress controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
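The controller pods take a moment to become Ready; the kind ingress guide suggests waiting on them, wrapped here in a small helper for reuse (the function name is illustrative):

```shell
# wait_for_ingress: block until the ingress-nginx controller pod is Ready.
# Namespace and selector follow the kind ingress documentation.
wait_for_ingress() {
  kubectl wait --namespace ingress-nginx \
    --for=condition=ready pod \
    --selector=app.kubernetes.io/component=controller \
    --timeout=90s
}
```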
List the kind node images:
nerdctl exec -ti kind-control-plane crictl images
Get the logs
kind export logs
kubectl port-forward svc/awx-service 3000:80
https://stackoverflow.com/questions/62432961/how-to-use-nodeport-with-kind
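As the Stack Overflow thread above explains, a NodePort service is only reachable from the host if the port was mapped when the cluster was created. A sketch of such a config; the port numbers 30080/8080 are assumptions:

```shell
# Write a cluster config that maps host port 8080 to NodePort 30080
# on the control-plane node. Port numbers are illustrative.
cat > cluster-config.yml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30080
    hostPort: 8080
    protocol: TCP
EOF
```

This must be set at `kind create cluster --config cluster-config.yml` time; it cannot be added afterwards.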
Delete
https://stackoverflow.com/questions/55672498/kubernetes-cluster-stuck-on-removing-pv-pvc
PV/PVC error: https://www.datree.io/resources/kubernetes-troubleshooting-fixing-persistentvolumeclaims-error
Automatic startup
Does not work:
nerdctl update --restart unless-stopped kind-control-plane
Could this be related to https://github.com/containerd/nerdctl/issues/2286?
See also podman generate systemd --new --files --name kind-control-plane and Quadlets: https://www.redhat.com/en/blog/quadlet-podman
~/.config/systemd/user/container-kind-control-plane.service
[Unit]
Description=Container kind-control-plane
Wants=network-online.target
After=network-online.target
#RequiresMountsFor=%t/containers

[Service]
Delegate=yes
#Type=notify
Type=oneshot
RemainAfterExit=yes
Environment=PODMAN_SYSTEMD_UNIT=%n
Environment=DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1003/bus
Environment=XDG_RUNTIME_DIR=/run/user/1003
ExecStartPre=/bin/bash -c '/usr/bin/podman stop kind-control-plane &'
ExecStartPre=/usr/bin/sleep 5
#Restart=on-failure
#TimeoutStopSec=60
ExecStart=/bin/bash -c '/usr/bin/podman start kind-control-plane &'
ExecStop=/bin/bash -c '/usr/bin/podman stop kind-control-plane &'
NotifyAccess=all

[Install]
WantedBy=default.target
See also:
systemctl --user daemon-reload
systemctl --user enable container-kind-control-plane.service
Problem
Error: failed to create fsnotify watcher - too many open files
kubectl get pods
kubectl logs -f awx-764564987d-wtw2f
failed to create fsnotify watcher: too many open files
Solution
/etc/sysctl.d/10-k8s.conf
# Raise inotify resource limits
fs.inotify.max_user_instances = 1024
fs.inotify.max_user_watches = 524288
sysctl -p /etc/sysctl.d/10-k8s.conf
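To verify that the raised limits are in effect, they can be read back from /proc:

```shell
# Read the effective inotify limits back from /proc.
cat /proc/sys/fs/inotify/max_user_instances
cat /proc/sys/fs/inotify/max_user_watches
```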
AWX
See:
kubectl get secret awx-admin-password -o jsonpath="{.data.password}" | base64 --decode ; echo
~/.bashrc
function awx-manage() {
  # podman exec -ti kind-control-plane crictl exec -ti --name awx-task awx-manage "$@"
  nerdctl exec -ti kind-control-plane -- crictl exec -ti --name awx-task awx-manage "$@"
}
nerdctl exec -ti kind-control-plane -- crictl exec -ti --name awx-task /bin/bash
cd /tmp/
python3 -m venv ipython
cd ipython/
source bin/activate
pip install ipython
export PYTHONPATH=/tmp/ipython/lib/python3.6/site-packages/
awx-manage shell_plus --ipython
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - containerPath: /data/postgres-13
    hostPath: /data/postgres-13
  - containerPath: /files
    hostPath: /data/files
    readOnly: true
  - containerPath: /shares
    hostPath: /data/shares
# If the nf_tables module is present instead of iptables
networking:
  kubeProxyMode: "nftables"
kind create cluster --config cluster-config.yml
kubectl get pods -A -w
git clone https://github.com/ansible/awx-operator.git
cd awx-operator
export NAMESPACE=awx
kubectl create ns ${NAMESPACE}
kubectl config set-context --current --namespace=$NAMESPACE
export RELEASE_TAG=`curl -s https://api.github.com/repos/ansible/awx-operator/releases/latest | grep tag_name | cut -d '"' -f 4`
git checkout $RELEASE_TAG
cd config/manager
~/code/awx-operator/bin/kustomize edit set image controller=quay.io/ansible/awx-operator:0.14.0
cd ~/code/awx-operator/
~/code/awx-operator/bin/kustomize build config/default | kubectl apply -f -
kubectl get pods -w
tee awx-pv.yml <<EOF
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: awx-postgres-13-volume
spec:
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 8Gi
  storageClassName: local-path
  hostPath:
    path: /data/postgres-13
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: awx-projects-volume
spec:
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 2Gi
  storageClassName: local-path
  hostPath:
    path: /data/projects
EOF
kubectl apply -f awx-pv.yml
kubectl get pv -w
tee awx-pvc.yml <<EOF
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: awx-projects-claim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 2Gi
  storageClassName: local-path
EOF
kubectl apply -f awx-pvc.yml
kubectl get pvc -w
tee awx-deployment.yml <<EOF
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  # These parameters are designed for use with AWX Operator 0.29.0
  # and AWX 21.6.0
  postgres_configuration_secret: awx-postgres-configuration
  postgres_storage_class: local-path
  postgres_storage_requirements:
    requests:
      storage: 8Gi
  projects_persistence: true
  garbage_collect_secrets: false
  projects_existing_claim: awx-projects-claim
  postgres_init_container_resource_requirements: {}
  postgres_resource_requirements: {}
  web_resource_requirements: {}
  task_resource_requirements: {}
  ee_resource_requirements: {}
  service_type: ClusterIP
  ingress_type: ingress
  hostname: awx.robert.local
  ingress_annotations: |
    kubernetes.io/ingress.class: traefik
EOF
kubectl apply -f awx-deployment.yml
kubectl get AWX -w
# nerdctl exec -ti kind-control-plane bash
# mkdir /data/postgres-13
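Once the AWX resource is applied, the operator takes several minutes to roll everything out. A hypothetical helper to block until the AWX pods are Ready; the label selector is an assumption and should be checked against your deployment:

```shell
# wait_for_awx: wait until AWX pods in the given namespace are Ready.
# The label selector is an assumption; verify it with:
#   kubectl get pods -n awx --show-labels
wait_for_awx() {
  kubectl -n "${1:-awx}" wait --for=condition=Ready pod \
    -l app.kubernetes.io/managed-by=awx-operator --timeout=600s
}
```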
SSH SFTP chroot
See also:
useradd partage
# useradd is a low level utility for adding users. On Debian, administrators should usually use adduser(8) instead.
groupadd sftpusers
usermod -aG sftpusers partage
mkdir /sftp
mkdir /sftp/partage
chown partage:partage /sftp/partage
chmod 700 /sftp/partage
Test
sudo -u partage ls /sftp/partage
If needed
chmod o+x /sftp/
/etc/ssh/sshd_config
Subsystem sftp internal-sftp
Match Group sftpusers
    ChrootDirectory /sftp/
    ForceCommand internal-sftp -d /%u
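Before restarting, the configuration can be syntax-checked; `sshd -t` exits non-zero and prints the offending line on error. Wrapped in a small helper here (the function name is illustrative):

```shell
# check_sshd: validate an sshd configuration file without applying it.
# sshd -t exits non-zero on a syntax error.
check_sshd() {
  sshd -t -f "${1:-/etc/ssh/sshd_config}"
}
```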
systemctl restart sshd
Ansible SemaphoreUI notes
See:
See also:
- Rundeck
- Polemarch
- Gitlab-CI
- AWX
See also, around AWX / Ansible:
- ARA Records Ansible
curl -X 'GET' -H 'Authorization: Bearer nb8jzkj3rcgoxej99onocburcsstghqhalnbsq5v6mg=' -H 'accept: text/plain; charset=utf-8' 'http://localhost:3000/api/project/1/templates' | jq .
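The raw JSON can be filtered client-side with jq; a sketch of a helper around the call above. The token variable is an assumption, and the "name" field should be verified against your SemaphoreUI API version:

```shell
# list_templates: print the name of each task template in a project.
# SEMAPHORE_TOKEN and the base URL are assumptions matching the example
# above; check the "name" field against your SemaphoreUI API version.
list_templates() {
  curl -s -H "Authorization: Bearer ${SEMAPHORE_TOKEN}" \
    "http://localhost:3000/api/project/${1:-1}/templates" | jq -r '.[].name'
}
```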
SSH problem - key type ssh-rsa not in PubkeyAcceptedKeyTypes
On AlmaLinux 8
# journalctl -u sshd -f
août 27 11:42:28 plop.acme.local sshd[35283]: userauth_pubkey: key type ssh-rsa not in PubkeyAcceptedKeyTypes [preauth]
Solution
# update-crypto-policies --show
DEFAULT:NO-SHA1:NO-SSHCBC:NO-WEAKMAC
# update-crypto-policies --set legacy
Setting system policy to LEGACY
Note: System-wide crypto policies are applied on application start-up.
# update-crypto-policies --show
LEGACY
See /etc/crypto-policies/back-ends/opensshserver.config
Or (does not work on AlmaLinux 8)
/etc/ssh/sshd_config
PubkeyAcceptedAlgorithms=+ssh-rsa
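Rather than lowering the whole system policy to LEGACY, a cleaner long-term fix is to replace the old ssh-rsa key with a modern type accepted by the DEFAULT policy. A sketch; the file name is illustrative:

```shell
# Generate an Ed25519 keypair (accepted by the DEFAULT crypto policy)
# and deploy its .pub to the server in place of the old ssh-rsa key.
# The file name is illustrative.
key="$HOME/.ssh/id_ed25519_migration"
mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"
[ -f "$key" ] || ssh-keygen -q -t ed25519 -f "$key" -N '' -C 'rsa-migration'
```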
Notes SSH proxy jump ProxyCommand
The point is to avoid copying your private SSH key onto other machines.
ssh -t -A serveurBastionRebond ssh serveurPlop118
Or, simpler:
~/.ssh/config
Host serveurPlop118
ProxyCommand ssh -W %h:%p bastion
### If needed
Host bastion
Hostname 192.168.2.34
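With OpenSSH 7.3 and later, the same hop can be written with ProxyJump, which subsumes the -W ProxyCommand form:

```
Host serveurPlop118
    ProxyJump bastion
```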
With sshpass
~/.ssh/config
Host l12* 192.168.* !pom01 !l12srvpom01 !192.168.50.160
User admin
ProxyCommand sshpass -e ssh -W %h:%p pom01
Host l12srvpom01 pom01
Hostname 192.168.50.160
User admin
read -s SSHPASS
export SSHPASS
Problem
$ ssh -t -A bastion ssh -o StrictHostKeychecking=no 192.168.1.22
Permission denied (publickey).
Shared connection to 171.33.90.69 closed
Solution
ssh -O stop bastion
or
ssh -o ControlMaster=no 192.168.1.22
Example
~/.ssh/config
Host rebond
Hostname 192.168.89.155
User jean
Host old-rhel5
Hostname 192.168.50.20
User root
ProxyCommand ssh -W %h:%p rebond
KexAlgorithms +diffie-hellman-group1-sha1,diffie-hellman-group14-sha1
HostKeyAlgorithms +ssh-rsa
PubkeyAcceptedKeyTypes +ssh-rsa
SetEnv TERM=linux
Host old-rhel3
Hostname 192.168.50.30
KexAlgorithms +diffie-hellman-group1-sha1,diffie-hellman-group14-sha1
HostKeyAlgorithms +ssh-rsa
PubkeyAcceptedKeyTypes +ssh-rsa
Ciphers +aes256-cbc
SetEnv TERM=linux
Host centreon
Hostname 192.168.50.21
User root
ProxyCommand ssh -W %h:%p rebond
RemoteForward 3128 192.168.89.221:3128
LocalForward 8081 localhost:80
# SendEnv LANG LC_*
