{{tag>Brouillon}}
= Elasticsearch Logstash Kibana Notes
See:
* https://www.scaleway.com/en/docs/setup-elastic-stack-on-scaleway/
* https://linuxfr.org/news/amazon-opensearch-fruit-d-une-rivalite-avec-elastic
* https://www.youtube.com/watch?v=IJ1IkALLChI
* https://krakensystems.co/blog/2018/logstash-nginx-logs-part-1
See also:
* Loki (replaces Elasticsearch and Logstash)
** Voir [[https://linuxfr.org/news/loki-centralisation-de-logs-a-la-sauce-prometheus|Loki, centralisation de logs à la sauce Prometheus]]
* https://vector.dev
* Metricbeat
* [[https://gist.github.com/g3rhard/b755db2aae0ecf5ee40c3ebf50ab520f|Promtail & Loki]]
**OpenSearch** replaces Elasticsearch
Grok syntax checker: https://grokdebug.herokuapp.com/
== Personal notes
To read:
* https://www.bmc.com/blogs/elasticsearch-filebeat-nginx/
* https://pawelurbanek.com/elk-nginx-logs-setup
API
* https://discuss.elastic.co/t/importing-dashboard-via-curl-fails-with-500-error/230421/6
* https://discuss.elastic.co/t/kibana-dashboard-import-api-not-working-from-api-export/208961/6
RSYSLOG
* https://medium.com/bolt-labs/using-json-for-nginx-log-format-793743064fc4
* https://www.elastic.co/fr/blog/how-to-centralize-logs-with-rsyslog-logstash-and-elasticsearch-on-ubuntu-14-04
* https://devconnected.com/monitoring-linux-logs-with-kibana-and-rsyslog/
Fluentd
* https://medium.com/@behroozam/how-to-parse-nginx-access-log-with-fluentd-and-send-it-to-elasticsearch-f66cf95bef43
NGINX JSON
* https://gist.github.com/NiceGuyIT/58dd4d553fe3017cbfc3f98c2fbdbc93
* https://community.centminmod.com/threads/how-to-configure-nginx-for-json-based-access-logging.19641/
* https://programmersought.com/article/68596876612/
DOCKER
* https://blog.atolcd.com/la-stack-elastic-donnees-metiers/
* https://maddevs.io/blog/log-collecting-with-elk-and-rsyslog/
== Elasticsearch
=== Config
''/etc/elasticsearch/jvm.options.d/mem.options''
-Xms512m
-Xmx512m
Security:
See: https://www.elastic.co/guide/en/elasticsearch/reference/7.12/security-minimal-setup.html
''/etc/elasticsearch/elasticsearch.yml''
xpack.security.enabled: true
Warning: this command can only be run once!
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords auto
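Once the passwords have been generated, authentication can be checked with curl (a sketch; ''GENERATED_PASSWORD'' is a placeholder for the value printed by the command above):

```shell
# Without credentials the request is now rejected (HTTP 401)
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9200
# With the generated elastic password, cluster info is returned
curl -s -u elastic:GENERATED_PASSWORD http://localhost:9200
```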
== Logstash
See also:
* Filebeat
* Fluentd
=== Config
Java memory configuration
''/etc/logstash/jvm.options''
## JVM configuration
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
#-Xms1g
#-Xmx1g
-Xms512m
-Xmx512m
Nginx example
See: https://www.elastic.co/guide/en/logstash/7.9/logstash-config-for-filebeat-modules.html#parsing-nginx
Note: prefer Filebeat
''/etc/logstash/conf.d/nginx-exemple.conf''
input {
file {
path => ["/var/log/nginx/access.log", "/var/log/nginx/error.log"]
type => "nginx"
}
}
filter {
if [fileset][module] == "nginx" {
if [fileset][name] == "access" {
grok {
match => { "message" => ["%{IPORHOST:[nginx][access][remote_ip]} - %{DATA:[nginx][access][user_name]} \[%{HTTPDATE:[nginx][access][time]}\] \"%{WORD:[nginx][access][method]} %{DATA:[nginx][access][url]} HTTP/%{NUMBER:[nginx][access][http_version]}\" %{NUMBER:[nginx][access][response_code]} %{NUMBER:[nginx][access][body_sent][bytes]} \"%{DATA:[nginx][access][referrer]}\" \"%{DATA:[nginx][access][agent]}\""] }
remove_field => "message"
}
mutate {
add_field => { "read_timestamp" => "%{@timestamp}" }
}
date {
match => [ "[nginx][access][time]", "dd/MMM/YYYY:H:m:s Z" ]
remove_field => "[nginx][access][time]"
}
useragent {
source => "[nginx][access][agent]"
target => "[nginx][access][user_agent]"
remove_field => "[nginx][access][agent]"
}
geoip {
source => "[nginx][access][remote_ip]"
#target => "[nginx][access][geoip]"
}
}
else if [fileset][name] == "error" {
grok {
match => { "message" => ["%{DATA:[nginx][error][time]} \[%{DATA:[nginx][error][level]}\] %{NUMBER:[nginx][error][pid]}#%{NUMBER:[nginx][error][tid]}: (\*%{NUMBER:[nginx][error][connection_id]} )?%{GREEDYDATA:[nginx][error][message]}"] }
remove_field => "message"
}
mutate {
rename => { "@timestamp" => "read_timestamp" }
}
date {
match => [ "[nginx][error][time]", "YYYY/MM/dd H:m:s" ]
remove_field => "[nginx][error][time]"
}
}
}
}
output {
elasticsearch {
hosts => ["localhost:9200"]
#user => elastic
#password => PassWord
#manage_template => false
#index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
index => "logstash-plop-%{+YYYY.MM.dd}"
}
# stdout { codec => rubydebug }
}
=== Debug
su - logstash -s /bin/bash
# Validate the configuration / check syntax
/usr/share/logstash/bin/logstash --config.test_and_exit --path.settings /etc/logstash -f /etc/logstash/conf.d/plop.conf
# Debug
/usr/share/logstash/bin/logstash --debug --path.settings /etc/logstash -f /etc/logstash/conf.d/plop.conf
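To experiment without touching the files in ''/etc/logstash/conf.d/'', a pipeline can also be passed inline with ''-e'' and fed from stdin (a minimal sketch; no filter section):

```shell
echo '127.0.0.1 - - test line' | /usr/share/logstash/bin/logstash \
  -e 'input { stdin {} } output { stdout { codec => rubydebug } }'
```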
=== Misc
Assorted notes
file {
path => "/var/log/apache2/apache.log"
start_position => "beginning"
type => "apache"
}
Elasticsearch in Docker (single node):
-p 9200:9200 -e "discovery.type=single-node"
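The fragment above looks like ''docker run'' options; a complete single-node invocation might look like this (image name and tag are assumptions, adjust to your version):

```shell
docker run -d --name elasticsearch \
  -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  docker.elastic.co/elasticsearch/elasticsearch:7.17.0
```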
== Kibana
=== Nginx reverse proxy
''/etc/nginx/sites-available/kibana.acme.fr''
server {
server_name kibana.acme.fr;
root /var/www/html;
location / {
proxy_pass http://127.0.0.1:5601;
include /etc/nginx/proxy_params;
client_max_body_size 10M;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_cache_bypass $http_upgrade;
}
access_log /var/log/nginx/kibana.acme.fr.log;
error_log /var/log/nginx/kibana.acme.fr.err;
listen [::]:443 ssl ipv6only=on; # managed by Certbot
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/kibana.acme.fr/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/kibana.acme.fr/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
if ($host = kibana.acme.fr) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80;
listen [::]:80;
server_name kibana.acme.fr;
return 404; # managed by Certbot
}
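To enable the vhost above (standard Debian/Ubuntu ''sites-available''/''sites-enabled'' layout assumed):

```shell
ln -s /etc/nginx/sites-available/kibana.acme.fr /etc/nginx/sites-enabled/
nginx -t && systemctl reload nginx
```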
=== Security
''/etc/kibana/kibana.yml''
elasticsearch.username: "elastic"
/usr/share/kibana/bin/kibana-keystore create
/usr/share/kibana/bin/kibana-keystore add elasticsearch.password
== Filebeat
See also Fluentd.
In some cases, it replaces Logstash.
''filebeat.yml''
output.elasticsearch:
hosts: ["http://localhost:9200"]
username: "elastic"
password: "P@ssw0rd"
setup.kibana:
host: "http://localhost:5601"
filebeat modules enable system nginx
filebeat setup
filebeat -e
Search for dashboards whose names start with "[Filebeat System]" and "[Filebeat Nginx]" to get a ready-to-use setup.
=== Drafts
filebeat setup -e \
-E output.logstash.enabled=false \
-E output.elasticsearch.hosts=['localhost:9200'] \
-E output.elasticsearch.username=filebeat_internal \
-E output.elasticsearch.password=YOUR_PASSWORD \
-E setup.kibana.host=localhost:5601
filebeat setup -e \
-E 'setup.template.overwrite=true' \
-E 'setup.kibana.host="localhost:5601"' \
-E 'output.logstash.enabled=false' \
-E 'output.elasticsearch.hosts=["localhost:9200"]'
filebeat keystore create
#filebeat keystore add ES_PWD
filebeat keystore add elastic
filebeat keystore list
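Keystore entries can then be referenced from ''filebeat.yml'' with the ''${KEY}'' syntax instead of a plaintext password (a sketch, assuming a key named ''ES_PWD'' was added as in the commented command above):

```yaml
output.elasticsearch:
  hosts: ["http://localhost:9200"]
  username: "elastic"
  password: "${ES_PWD}"
```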
FIXME