Filebeat has completely replaced Logstash-Forwarder as the new generation of log collector because it is lighter and more secure. The deployment architecture based on Filebeat + ELK is as follows: Filebeat ships the application log files over an SSL connection to Logstash, Logstash filters and enriches the events and writes them into a three-node Elasticsearch cluster, and Kibana sits on top as the query and visualization layer.

Software versions: Elasticsearch 7.5.1, Logstash 7.5.1, Kibana 7.5.1, Filebeat 7.5.1.
docker-compose file

```yaml
version: "3"
services:
  es-master:
    container_name: es-master
    hostname: es-master
    image: elasticsearch:7.5.1
    restart: always
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - ./elasticsearch/master/conf/es-master.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./elasticsearch/master/data:/usr/share/elasticsearch/data
      - ./elasticsearch/master/logs:/usr/share/elasticsearch/logs
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"

  es-slave1:
    container_name: es-slave1
    image: elasticsearch:7.5.1
    restart: always
    ports:
      - 9201:9200
      - 9301:9300
    volumes:
      - ./elasticsearch/slave1/conf/es-slave1.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./elasticsearch/slave1/data:/usr/share/elasticsearch/data
      - ./elasticsearch/slave1/logs:/usr/share/elasticsearch/logs
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"

  es-slave2:
    container_name: es-slave2
    image: elasticsearch:7.5.1
    restart: always
    ports:
      - 9202:9200
      - 9302:9300
    volumes:
      - ./elasticsearch/slave2/conf/es-slave2.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./elasticsearch/slave2/data:/usr/share/elasticsearch/data
      - ./elasticsearch/slave2/logs:/usr/share/elasticsearch/logs
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"

  kibana:
    container_name: kibana
    hostname: kibana
    image: kibana:7.5.1
    restart: always
    ports:
      - 5601:5601
    volumes:
      - ./kibana/conf/kibana.yml:/usr/share/kibana/config/kibana.yml
    environment:
      - elasticsearch.hosts=http://es-master:9200
    depends_on:
      - es-master
      - es-slave1
      - es-slave2

  # filebeat:
  #   container_name: filebeat
  #   hostname: filebeat
  #   image: docker.elastic.co/beats/filebeat:7.5.1
  #   restart: always
  #   volumes:
  #     - ./filebeat/conf/filebeat.yml:/usr/share/filebeat/filebeat.yml
  #     # Mounted into the container as the data source
  #     - ./logs:/home/project/spring-boot-elasticsearch/logs
  #     - ./filebeat/logs:/usr/share/filebeat/logs
  #     - ./filebeat/data:/usr/share/filebeat/data
  #   # Link to the logstash container by alias, so the connection survives dynamic IP changes
  #   links:
  #     - logstash
  #   # Service dependencies (optional)
  #   depends_on:
  #     - es-master
  #     - es-slave1
  #     - es-slave2

  logstash:
    container_name: logstash
    hostname: logstash
    image: logstash:7.5.1
    command: logstash -f ./conf/logstash-filebeat.conf
    restart: always
    volumes:
      # Mounted into the container
      - ./logstash/conf/logstash-filebeat.conf:/usr/share/logstash/conf/logstash-filebeat.conf
      - ./logstash/ssl:/usr/share/logstash/ssl
    environment:
      - elasticsearch.hosts=http://es-master:9200
      # Fixes the logstash monitoring connection error
      - xpack.monitoring.elasticsearch.hosts=http://es-master:9200
    ports:
      - 5044:5044
    depends_on:
      - es-master
      - es-slave1
      - es-slave2
```
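The bind mounts above assume a matching directory tree next to the compose file. A minimal sketch to create it (the filebeat and logs directories are only needed if the commented-out filebeat service is enabled):

```bash
# Host directory layout implied by the volume mounts above
mkdir -p elasticsearch/{master,slave1,slave2}/{conf,data,logs}
mkdir -p kibana/conf logstash/conf logstash/ssl
mkdir -p filebeat/conf filebeat/logs filebeat/data logs

# The official Elasticsearch image runs as uid 1000, so the mounted
# data and log directories may need to be writable by that user
chown -R 1000:1000 elasticsearch/*/data elasticsearch/*/logs
```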
es-master.yml

```yaml
# Cluster name
cluster.name: es-cluster
# Node name
node.name: es-master
# Whether this node is master-eligible
node.master: true
# Whether this node stores data (enabled by default)
node.data: false
# Network binding
network.host: 0.0.0.0
# HTTP port for external requests
http.port: 9200
# TCP port for inter-node communication
transport.port: 9300
# Cluster discovery
discovery.seed_hosts:
  - es-master
  - es-slave1
  - es-slave2
# Explicitly list the master-eligible nodes (by name or IP); they are considered in the first election
cluster.initial_master_nodes:
  - es-master
# Allow cross-origin access
http.cors.enabled: true
http.cors.allow-origin: "*"
# Security authentication
xpack.security.enabled: false
#http.cors.allow-headers: "Authorization"
```

es-slave1.yml

```yaml
# Cluster name
cluster.name: es-cluster
# Node name
node.name: es-slave1
# Whether this node is master-eligible
node.master: true
# Whether this node stores data (enabled by default)
node.data: true
# Network binding
network.host: 0.0.0.0
# HTTP port inside the container (docker-compose maps host port 9201 to it)
http.port: 9200
# TCP port for inter-node communication
#transport.port: 9301
# Cluster discovery
discovery.seed_hosts:
  - es-master
  - es-slave1
  - es-slave2
# Explicitly list the master-eligible nodes (by name or IP); they are considered in the first election
cluster.initial_master_nodes:
  - es-master
# Allow cross-origin access
http.cors.enabled: true
http.cors.allow-origin: "*"
# Security authentication
xpack.security.enabled: false
#http.cors.allow-headers: "Authorization"
```

es-slave2.yml

```yaml
# Cluster name
cluster.name: es-cluster
# Node name
node.name: es-slave2
# Whether this node is master-eligible
node.master: true
# Whether this node stores data (enabled by default)
node.data: true
# Network binding
network.host: 0.0.0.0
# HTTP port inside the container (docker-compose maps host port 9202 to it)
http.port: 9200
# TCP port for inter-node communication
#transport.port: 9302
# Cluster discovery
discovery.seed_hosts:
  - es-master
  - es-slave1
  - es-slave2
# Explicitly list the master-eligible nodes (by name or IP); they are considered in the first election
cluster.initial_master_nodes:
  - es-master
# Allow cross-origin access
http.cors.enabled: true
http.cors.allow-origin: "*"
# Security authentication
xpack.security.enabled: false
#http.cors.allow-headers: "Authorization"
```
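Once all three containers are up, it is worth verifying that the nodes actually formed one cluster. A quick sanity check from the host, using the ports mapped in the compose file:

```bash
# All three nodes should be listed, with es-master elected as master
curl "http://localhost:9200/_cat/nodes?v"
# Cluster status should be green (or yellow while replicas are still allocating)
curl "http://localhost:9200/_cluster/health?pretty"
```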
logstash-filebeat.conf

```conf
input {
  # Source: beats
  beats {
    # Port
    port => "5044"
    # Enable SSL (required for the certificate settings below to take effect)
    ssl => true
    ssl_certificate_authorities => ["/usr/share/logstash/ssl/ca.crt"]
    ssl_certificate => "/usr/share/logstash/ssl/server.crt"
    ssl_key => "/usr/share/logstash/ssl/server.key"
    ssl_verify_mode => "force_peer"
  }
}
# Analysis and filtering plugins; multiple filters can be combined
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  geoip {
    source => "clientip"
  }
}
output {
  # Output to Elasticsearch
  elasticsearch {
    hosts => ["http://es-master:9200"]
    index => "%{[fields][service]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
```

filebeat.yml

```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    # All .log files in this directory
    - /root/tmp/logs/*.log
  fields:
    service: "our31-java"
  multiline.pattern: ^\[
  multiline.negate: true
  multiline.match: after
- type: log
  enabled: true
  paths:
    # All .log files in this directory
    - /root/tmp/log/*.log
  fields:
    service: "our31-nginx"

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

#setup.template.settings:
#  index.number_of_shards: 1
#setup.dashboards.enabled: false
#setup.kibana:
#  host: "http://localhost:5601"

# Do not ship directly to Elasticsearch
#output.elasticsearch:
#  hosts: ["http://es-master:9200"]
#  index: "filebeat-%{[beat.version]}-%{+yyyy.MM.dd}"

setup.ilm.enabled: false

output.logstash:
  hosts: ["logstash.server.com:5044"]
  # Optional SSL, off by default.
  # List of root certificates for server verification
  ssl.certificate_authorities: ["./ssl/ca.crt"]
  # Certificate for SSL client authentication
  ssl.certificate: "./ssl/client.crt"
  # Client certificate key
  ssl.key: "./ssl/client.key"

#processors:
#- add_host_metadata: ~
#- add_cloud_metadata: ~
```

Notice

Generate certificates and configure SSL so that Filebeat and Logstash talk over an authenticated, encrypted connection:

```bash
# Generate the CA private key
openssl genrsa 2048 > ca.key
# Use the CA private key to create the CA certificate
openssl req -new -x509 -nodes -key ca.key -subj "/CN=elkCA CA/OU=Development group/O=HomeIT SIA/DC=elk/DC=com" > ca.crt
# Generate the server certificate request (CSR)
openssl req -newkey rsa:2048 -nodes -keyout server.key -subj "/CN=logstash.server.com/OU=Development group/O=Home SIA/DC=elk/DC=com" > server.csr
# Issue the server certificate with the CA certificate and private key
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 > server.crt
# Generate the client certificate request (CSR)
openssl req -newkey rsa:2048 -nodes -keyout client.key -subj "/CN=filebeat.client.com/OU=Development group/O=Home SIA/DC=elk/DC=com" > client.csr
# Issue the client certificate with the CA certificate and private key
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -set_serial 01 > client.crt
```

Remember to put the certificates into the corresponding folders. The domain name configured in output.logstash.hosts in filebeat.yml must match the CN of the server certificate (logstash.server.com here), otherwise certificate verification fails.

Dynamically generate indexes per server, service, and date

In the filebeat.yml above, custom fields (fields.service) are added. These fields are passed on to Logstash, which combines them with the Beats version and the date to dynamically create indexes in Elasticsearch, as shown below:
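For example, with the index pattern from the Logstash output above and the service fields from filebeat.yml, the daily indexes would be named along these lines (a sketch; the dates are illustrative):

```bash
# List the indexes created by Logstash; expect names such as
#   our31-java-7.5.1-2020.01.10
#   our31-nginx-7.5.1-2020.01.10
curl "http://localhost:9200/_cat/indices?v"
```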
I originally wanted to use indices to dynamically generate indexes here, but following the official configuration it did not succeed; if anyone knows why, please let me know.

Use Nginx HTTP Basic Authentication to require a login for Kibana

First install the tool:

```bash
$ yum -y install httpd-tools
```

Create a new password file with htpasswd -c, and append further user entries by running htpasswd again without -c. Finally, configure Nginx:

```nginx
server {
    ......
    auth_basic "Kibana Auth";
    auth_basic_user_file /usr/local/nginx/pwd/kibana/passwd;
    ......
}
```

How to start Filebeat separately

```bash
$ nohup ./filebeat 2>&1 &
```

Start Docker Compose

Execute in the directory where docker-compose.yml is located:

```bash
$ docker-compose up --build -d
```

This is the end of this article about one-click ELK deployment with Docker Compose. For more information about Docker Compose ELK deployment, please search for previous articles on 123WORDPRESS.COM. I hope you will continue to support 123WORDPRESS.COM!