Prometheus notes
See:
- PromQL
See also:
- Prometheus VictoriaMetrics victoriadb
- Alertmanager (as part of Prometheus)
See Identity correlation / CorrelationId
Server
#docker run -p 9090:9090 -v ~/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus
mkdir -p prometheus/nodes
#sudo docker run -p 9090:9090 -v ~/prometheus:/etc/prometheus prom/prometheus
podman run -p 9090:9090 -v ~/prometheus:/etc/prometheus docker.io/prom/prometheus
Config
prometheus.yml
# my global config
global:
  scrape_interval: 15s     # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    # static_configs:
    # - targets: ['localhost:9090']
  - job_name: 'node'
    file_sd_configs:
      - files: [ "/etc/prometheus/nodes/*.yml" ]
prometheus/nodes/vm1.yml
- targets: [ "172.17.0.1:9100" ]
  labels:
    # You can add whatever you want here to tag the machine
    host: "vm1"
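Optionally, the configuration can be checked before (re)starting the server; a quick sketch with promtool, which ships with Prometheus (path assumes the layout used above):

# validate the main config and the files it references
promtool check config ~/prometheus/prometheus.yml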
Exporter (client)
See also:
- check-mk-agent
Create a node_exporter account
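A minimal sketch for creating that account as a dedicated, non-login system user (names match the unit file below; on RHEL-like systems the nologin shell is /sbin/nologin):

# locked system account dedicated to node_exporter
sudo useradd --system --no-create-home --shell /usr/sbin/nologin node_exporter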
/etc/systemd/system/node_exporter.service
[Unit]
Description=node_exporter
Wants=network-online.target
After=network-online.target

[Service]
User=node_exporter
Group=node_exporter
Restart=on-failure
Type=simple
ExecStart=/usr/local/bin/node_exporter --collector.systemd --collector.ntp --collector.processes --collector.tcpstat

[Install]
WantedBy=multi-user.target
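Then, assuming the binary is already installed in /usr/local/bin, reload systemd and start the service:

sudo systemctl daemon-reload
sudo systemctl enable --now node_exporter
# quick check that the exporter answers
curl -s http://localhost:9100/metrics | head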
Grafana
Prometheus
See
docker run -d --name=grafana -p 3000:3000 grafana/grafana
Voir https://grafana.com/docs/grafana/latest/administration/configure-docker/
docker-compose.yml
version: '3.7'
services:
  grafana:
    image: grafana/grafana
    ports:
      - 3000:3000
    volumes:
      - "$PWD/data:/var/lib/grafana"
Run Grafana container with persistent storage (recommended)
Create a persistent volume for your data in /var/lib/grafana (database and plugins)
docker volume create grafana-storage
Start grafana
docker run -d -p 3000:3000 --name=grafana -v grafana-storage:/var/lib/grafana grafana/grafana
Run Grafana container using bind mounts
You may want to run Grafana in Docker but use folders on your host for the database or configuration. When doing so, it becomes important to start the container with a user that is able to access and write to the folder you map into the container.
mkdir data    # creates a folder for your data
ID=$(id -u)   # saves your user id in the ID variable
# starts grafana with your user id and using the data folder
#docker run -d --user $ID --volume "$PWD/data:/var/lib/grafana" -p 3000:3000 grafana/grafana:7.2.1
podman run --volume "$PWD/data:/var/lib/grafana" -p 3000:3000 docker.io/grafana/grafana:7.2.1
Linux kernel perf profiling notes
Clipboard notes
See:
- Qlipper
- ClipIt
- Kpcli
- xclip
- xsel
- autocutsel
- pastebinit
http://tech.dcolon.org/wordpress/copy-and-paste-for-keepass-under-linux/
autocutsel &
autocutsel -s PRIMARY &
Postgres notes
See:
See the pgcli command-line client, with autocompletion and syntax highlighting: http://blog.adminrezo.fr/2016/01/mycli-pgcli-mysql-postregsql-clients/
Tree / hierarchy schema, CSV
Postgres HA
- WITNESS-SERVER
See:
Monitoring:
- temBoard Agent
DSN: pgsql:host=localhost;port=5432;dbname=testdb;user=myuser;password=mypass (See PDO PostgreSQL)
Notes
Creating the DB
sudo su - postgres
psql
CREATE ROLE myuser WITH LOGIN PASSWORD 'P@ssw0rd';
CREATE DATABASE mydatabase OWNER myuser;
Connection
psql -U myuser -h hostname -d mydatabase
Or with a file
- ~/.pgpass
hostname:port:database:username:password
Escape characters such as ':' in the password with a backslash.
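For example, an entry for a hypothetical password p:ssword would look like this (the colon is escaped); note that libpq ignores the file unless its permissions are 0600 (chmod 600 ~/.pgpass):

# hypothetical host, database, user and password
db1.example.com:5432:mydatabase:myuser:p\:ssword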
Reindex
sudo -u postgres reindexdb --all
Connecting to a socket
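A sketch: psql connects through a local Unix socket when -h points at a directory rather than a hostname (/var/run/postgresql on Debian, /tmp on some setups, as in the CSV export example further down):

sudo -u postgres psql -h /var/run/postgresql -d mydatabase
# or, if the socket lives in /tmp
psql -h /tmp -p 5432 -U myuser -d mydatabase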
psql commands
\l    show databases
\d    show all
\dt   show tables
\ef   edit function
\x    row / line select
echo-hidden - hidden queries - detail of the SQL behind psql commands
Get the detail of the hidden queries that psql runs, for example the query behind the \dt command.
This is possible with the --echo-hidden (or -E) option.
$ env PGPASSWORD=$TF_VAR_pgpass psql -E -q -h jbl1-rdsscm-dev-env.cuapezqvgl58.eu-central-1.rds.amazonaws.com -U $TF_VAR_pguser --dbname=$TF_VAR_pgname
postgres=> \dt
********* QUERY **********
SELECT n.nspname as "Schema",
c.relname as "Name",
CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'm' THEN 'materialized view' WHEN 'i' THEN 'index' WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' WHEN 'f' THEN 'foreign table' WHEN 'p' THEN 'table' WHEN 'I' THEN 'index' END as "Type",
pg_catalog.pg_get_userbyid(c.relowner) as "Owner"
FROM pg_catalog.pg_class c
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r','p','')
AND n.nspname <> 'pg_catalog'
AND n.nspname <> 'information_schema'
AND n.nspname !~ '^pg_toast'
AND pg_catalog.pg_table_is_visible(c.oid)
ORDER BY 1,2;
**************************
System queries
SELECT * FROM pg_stat_activity;
Export the running queries to a CSV file
psql -h /tmp -p 5432 -q --csv -c "SELECT * FROM pg_stat_activity;" 2>>/tmp/log_query.err | gzip > /tmp/log_query_$(date +%Y-%m-%d-%H%M).csv.gz
Config
Red Hat recommendation for AAP
max_connections == 1024
shared_buffers == ansible_memtotal_mb*0.3
work_mem == ansible_memtotal_mb*0.03
maintenance_work_mem == ansible_memtotal_mb*0.04
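As a rough illustration of those ratios, for a hypothetical 16 GB host (ansible_memtotal_mb = 16384):

# apply the ratios above to a 16384 MB host
MEM_MB=16384
echo "shared_buffers = $((MEM_MB * 30 / 100))MB"        # ~4915 MB
echo "work_mem = $((MEM_MB * 3 / 100))MB"               # ~491 MB
echo "maintenance_work_mem = $((MEM_MB * 4 / 100))MB"   # ~655 MB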
See also:
Ansible
#!/usr/bin/ansible-playbook
---
- name: Postgres Select Example
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Select from users table
      postgresql_query:
        login_host: db1.acme.local
        login_user: pg_user
        login_password: "P@ssw0rd!"
        login_db: db1
        login_port: 5455
        query: "SELECT * FROM users LIMIT 10;"
      register: db_result

    - name: DEBUG 10
      debug:
        var: db_result
Miscellaneous
During an upgrade on Debian
To resolve the situation, before upgrading, execute:
# su - postgres
$ pg_lsclusters
$ pg_ctlcluster 9.4 main start
$ pg_dumpall --cluster 9.4/main | pigz > 9.4-main.dump.gz
$ cp -a /etc/postgresql/9.4/main 9.4-main.config
$ pg_dropcluster 9.4 main --stop

Then after the upgrade, execute:
# su - postgres
$ pg_createcluster 9.4 main
$ cp 9.4-main.config/* /etc/postgresql/9.4/main
$ pg_ctlcluster 9.4 main start
$ zcat 9.4-main.dump.gz | psql -q
$ rm -rf 9.4-main.config 9.4-main.dump.gz
Force drop db while others may be connected
To test:
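A sketch to try (database name is a placeholder): terminate the other sessions connected to the database, then drop it; PostgreSQL 13+ can combine both steps with WITH (FORCE):

# kick every other session off 'mydatabase', then drop it
psql -U postgres -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'mydatabase' AND pid <> pg_backend_pid();"
psql -U postgres -c "DROP DATABASE mydatabase;"
# PostgreSQL 13+ one-liner:
# psql -U postgres -c "DROP DATABASE mydatabase WITH (FORCE);"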
Miscellaneous
Connections per database (note: this query uses MySQL/MariaDB's performance_schema, not Postgres):
SELECT DB,COUNT(*) FROM performance_schema.processlist GROUP BY DB;
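A rough Postgres equivalent, reusing pg_stat_activity as in the system queries above:

psql -c "SELECT datname, count(*) FROM pg_stat_activity GROUP BY datname;"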
Postgres Python notes
See:
See also:
- pg8000
- python-sqlalchemy
Example
Vacuum
import psycopg2

dbname = 'dbname'
user = 'postgres'
host = '192.168.1.10'
password = 'password'

c = "dbname='%s' user='%s' host='%s' password='%s'"
conn = psycopg2.connect(c % (dbname, user, host, password))
conn.set_session(autocommit=True)
cur = conn.cursor()
cur.execute("VACUUM FULL ANALYSE")
cur.close()
conn.close()
Query select - Fetch
cur = conn.cursor()
cur.execute("SELECT plop.purge()")
if cur.rowcount > 0:
    row = cur.fetchone()
else:
    row = None
while row is not None:
    print(row)
    row = cur.fetchone()
cur.close()
conn.commit()
conn.close()
with statement
Source : https://www.psycopg.org/docs/usage.html#with-statement
conn = psycopg2.connect(DSN)

with conn:
    with conn.cursor() as curs:
        curs.execute(SQL1)

with conn:
    with conn.cursor() as curs:
        curs.execute(SQL2)

conn.close()
Warning
Unlike file objects or other resources, exiting the connection's with block doesn't close the connection, but only the transaction associated with it. If you want to make sure the connection is closed after a certain point, you should still use a try/finally block:
conn = psycopg2.connect(DSN)
try:
    pass  # connection usage
finally:
    conn.close()
