See also:
OpenSearch (an open-source fork) can replace Elasticsearch
Grok syntax checker: https://grokdebug.herokuapp.com/
To read:
API
RSYSLOG
Fluentd
NGINX JSON
DOCKER
/etc/elasticsearch/jvm.options.d/mem.options
-Xms512m
-Xmx512m
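To verify the heap actually in use, a quick sketch with curl (assuming Elasticsearch answers on localhost:9200; add -u elastic:... once security is enabled):
# Nodes info API: check the JVM heap limits reported by each node
curl -s 'http://localhost:9200/_nodes/jvm?pretty' | grep -E 'heap_(init|max)_in_bytes'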
Security: see https://www.elastic.co/guide/en/elasticsearch/reference/7.12/security-minimal-setup.html
/etc/elasticsearch/elasticsearch.yml
xpack.security.enabled: true
Warning: this command can only be run once!
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords auto
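Once the passwords have been generated, a quick check that authentication works (replace GENERATED_PASSWORD with the value printed by the command above):
# Authenticated request against the cluster health endpoint
curl -u elastic:GENERATED_PASSWORD 'http://localhost:9200/_cluster/health?pretty'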
Java memory configuration
/etc/logstash/jvm.options
## JVM configuration
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
#-Xms1g
#-Xmx1g
-Xms512m
-Xmx512m
Nginx example
See: https://www.elastic.co/guide/en/logstash/7.9/logstash-config-for-filebeat-modules.html#parsing-nginx
Note: prefer Filebeat
/etc/logstash/conf.d/nginx-exemple.conf
input {
  file {
    path => ["/var/log/nginx/access.log", "/var/log/nginx/error.log"]
    type => "nginx"
  }
}
filter {
  if [fileset][module] == "nginx" {
    if [fileset][name] == "access" {
      grok {
        match => { "message" => ["%{IPORHOST:[nginx][access][remote_ip]} - %{DATA:[nginx][access][user_name]} \[%{HTTPDATE:[nginx][access][time]}\] \"%{WORD:[nginx][access][method]} %{DATA:[nginx][access][url]} HTTP/%{NUMBER:[nginx][access][http_version]}\" %{NUMBER:[nginx][access][response_code]} %{NUMBER:[nginx][access][body_sent][bytes]} \"%{DATA:[nginx][access][referrer]}\" \"%{DATA:[nginx][access][agent]}\""] }
        remove_field => "message"
      }
      mutate {
        add_field => { "read_timestamp" => "%{@timestamp}" }
      }
      date {
        match => [ "[nginx][access][time]", "dd/MMM/YYYY:H:m:s Z" ]
        remove_field => "[nginx][access][time]"
      }
      useragent {
        source => "[nginx][access][agent]"
        target => "[nginx][access][user_agent]"
        remove_field => "[nginx][access][agent]"
      }
      geoip {
        source => "[nginx][access][remote_ip]"
        #target => "[nginx][access][geoip]"
      }
    }
    else if [fileset][name] == "error" {
      grok {
        match => { "message" => ["%{DATA:[nginx][error][time]} \[%{DATA:[nginx][error][level]}\] %{NUMBER:[nginx][error][pid]}#%{NUMBER:[nginx][error][tid]}: (\*%{NUMBER:[nginx][error][connection_id]} )?%{GREEDYDATA:[nginx][error][message]}"] }
        remove_field => "message"
      }
      mutate {
        rename => { "@timestamp" => "read_timestamp" }
      }
      date {
        match => [ "[nginx][error][time]", "YYYY/MM/dd H:m:s" ]
        remove_field => "[nginx][error][time]"
      }
    }
  }
}
output {
  elasticsearch {
    hosts => localhost
    #user => elastic
    #password => PassWord
    #manage_template => false
    #index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    index => "logstash-plop-%{+YYYY.MM.dd}"
  }
  # stdout { codec => rubydebug }
}
su - logstash -s /bin/bash
# Validate the configuration / check the syntax
/usr/share/logstash/bin/logstash --config.test_and_exit --path.settings /etc/logstash -f /etc/logstash/conf.d/plop.conf
# Debug
/usr/share/logstash/bin/logstash --debug --path.settings /etc/logstash -f /etc/logstash/conf.d/plop.conf
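Once the configuration validates, a minimal sketch to apply it and watch the pipeline start (assuming a systemd-based install):
systemctl restart logstash
journalctl -u logstash -f
# Logs also land in /var/log/logstash/logstash-plain.log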
Miscellaneous notes
file {
path => "/var/log/apache2/apache.log"
start_position => "beginning"
type => "apache"
}
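The file input fragment above only reads the log lines; a minimal sketch of a matching filter, assuming Apache's combined log format (COMBINEDAPACHELOG is a stock grok pattern, the apache type comes from the input above):
filter {
  if [type] == "apache" {
    # Parse the combined log format into structured fields
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    # Use the request time from the log instead of the ingestion time
    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  }
}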
elasticsearch
-p 9200:9200 -e discovery.type=single-node
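A minimal single-node sketch with Docker (the image tag is an assumption, adjust it to the version you run):
docker run -d --name elasticsearch \
  -p 9200:9200 \
  -e "discovery.type=single-node" \
  -e "ES_JAVA_OPTS=-Xms512m -Xmx512m" \
  docker.elastic.co/elasticsearch/elasticsearch:7.17.0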
/etc/nginx/sites-available/kibana.acme.fr
server {
    server_name kibana.acme.fr;
    root /var/www/html;

    location / {
        proxy_pass http://127.0.0.1:5601;
        include /etc/nginx/proxy_params;
        client_max_body_size 10M;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_cache_bypass $http_upgrade;
    }

    access_log /var/log/nginx/kibana.acme.fr.log;
    error_log /var/log/nginx/kibana.acme.fr.err;

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/kibana.acme.fr/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/kibana.acme.fr/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = kibana.acme.fr) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name kibana.acme.fr;
    return 404; # managed by Certbot
}
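To enable the vhost (Debian/Ubuntu layout assumed):
ln -s /etc/nginx/sites-available/kibana.acme.fr /etc/nginx/sites-enabled/
nginx -t && systemctl reload nginx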
/etc/kibana/kibana.yml
elasticsearch.username: "elastic"
/usr/share/kibana/bin/kibana-keystore create
/usr/share/kibana/bin/kibana-keystore add elasticsearch.password
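After filling the keystore, restart Kibana and poke its status API (sketch, assuming Kibana still listens locally on 5601):
systemctl restart kibana
curl -s 'http://localhost:5601/api/status' | head -c 300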
See also Fluentd
In some cases, it can replace Logstash
filebeat.yml
output.elasticsearch:
  hosts: ["http://localhost:9200"]
  username: "elastic"
  password: "P@ssw0rd"

setup.kibana:
  host: "http://localhost:5601"
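Before enabling modules, the configuration and the Elasticsearch output can be checked with Filebeat's built-in test commands:
filebeat test config
filebeat test output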
filebeat modules enable system nginx
filebeat setup
filebeat -e
Just look for dashboards whose names start with "[Filebeat System]" and "[Filebeat Nginx]" to get a ready-to-use setup.
filebeat setup -e \
  -E output.logstash.enabled=false \
  -E output.elasticsearch.hosts=['localhost:9200'] \
  -E output.elasticsearch.username=filebeat_internal \
  -E output.elasticsearch.password=YOUR_PASSWORD \
  -E setup.kibana.host=localhost:5601

filebeat setup -e \
  -E 'setup.template.overwrite=true' \
  -E 'setup.kibana.host="localhost:5601"' \
  -E 'output.logstash.enabled=false' \
  -E 'output.elasticsearch.hosts=["localhost:9200"]'

filebeat keystore create
#filebeat keystore add ES_PWD
filebeat keystore add elastic
filebeat keystore list
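Keystore entries can then be referenced from filebeat.yml instead of a clear-text password; a sketch assuming a key named ES_PWD was added to the keystore:
# filebeat.yml -- the ${ES_PWD} placeholder is resolved from the Filebeat keystore
output.elasticsearch:
  hosts: ["http://localhost:9200"]
  username: "elastic"
  password: "${ES_PWD}"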