1. Find the top 10 IPs and URLs by number of visits (see the query sketch after this list)
2. Find the top 10 IPs and URLs between 10:00 and 12:00
3. Compare with the same time window yesterday: what changed
4. Compare with the same day and time window last week
5. Count visits from search engines, and how many visits each search engine made
6. Access counts and response times for the key links of a given domain
7. Distribution of the site's HTTP status codes
8. Find the attacker's IP: which pages it requested, when it arrived, when it left, and how many requests it made in total
9. Deliver the results within 5 minutes
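Once the access log is indexed with its fields split out (the goal of the rest of this document), requirement 1 reduces to a terms aggregation. A minimal sketch for Kibana Dev Tools, assuming the JSON log format defined below (remote_addr / request fields with dynamic .keyword sub-fields) and an index pattern such as nginx-access-*:

GET nginx-access-*/_search
{
  "size": 0,
  "query": {
    "range": { "@timestamp": { "gte": "now-1d/d" } }
  },
  "aggs": {
    "top10_ip":  { "terms": { "field": "remote_addr.keyword", "size": 10 } },
    "top10_url": { "terms": { "field": "request.keyword", "size": 10 } }
  }
}

Requirement 2 only needs the range clause narrowed to the 10:00-12:00 window with absolute gte/lte timestamps.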
E: Elasticsearch (Java)
B: Filebeat (Go)
L: Logstash (JRuby, runs on the JVM)
K: Kibana (JavaScript / Node.js)
rpm -ivh elasticsearch-7.9.1-x86_64.rpm
cat > /etc/elasticsearch/elasticsearch.yml << 'EOF'
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 127.0.0.1,10.0.0.51
http.port: 9200
discovery.seed_hosts: ["10.0.0.51"]
cluster.initial_master_nodes: ["10.0.0.51"]
EOF
systemctl daemon-reload
systemctl start elasticsearch.service
netstat -lntup|grep 9200
curl 127.0.0.1:9200
If something goes wrong: stop the service, wipe the data directory, and start it again
systemctl stop elasticsearch.service
rm -rf /var/lib/elasticsearch/*
systemctl start elasticsearch.service
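To confirm the node came back healthy, the standard _cat endpoints can be queried, for example:

curl '127.0.0.1:9200/_cat/health?v'
curl '127.0.0.1:9200/_cat/nodes?v'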
rpm -ivh kibana-7.9.1-x86_64.rpm
cat > /etc/kibana/kibana.yml << 'EOF'
server.port: 5601
server.host: "10.0.0.51"
elasticsearch.hosts: ["http://10.0.0.51:9200"]
kibana.index: ".kibana"
EOF
systemctl start kibana
If something goes wrong: stop the service, wipe the data, and start it again
systemctl stop kibana.service
rm -rf /var/lib/kibana/*
# delete the .kibana index in ES
systemctl start kibana
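Deleting the Kibana index is a plain DELETE against ES; a sketch (note this wipes all saved Kibana objects such as index patterns and dashboards):

curl -XDELETE 'http://10.0.0.51:9200/.kibana*'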
Google Chrome --> More tools --> Extensions --> Developer mode --> load the unpacked plugin directory
cat > /etc/yum.repos.d/nginx.repo <<'EOF'
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=0
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/
gpgcheck=0
enabled=0
gpgkey=https://nginx.org/keys/nginx_signing.key
EOF
yum install nginx -y
systemctl start nginx
for i in `seq 10`;do curl -I 127.0.0.1 &>/dev/null ;done
rpm -ivh filebeat-7.9.1-x86_64.rpm
cp /etc/filebeat/filebeat.yml /opt/
cat > /etc/filebeat/filebeat.yml << EOF
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/nginx/access.log
output.elasticsearch:
hosts: ["10.0.0.51:9200"]
EOF
If something goes wrong: stop the service, wipe the data, and start it again
systemctl stop filebeat.service
rm -rf /var/lib/filebeat/*
systemctl start filebeat.service
systemctl start filebeat
tail -f /var/log/filebeat/filebeat
filebeat-7.9.1-2020.12.29-000001
ilm-history-2-000001
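These index names can be confirmed directly from ES, for example:

curl '10.0.0.51:9200/_cat/indices?v'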
Shortcoming of the current setup: the whole log line ends up in a single message field, so individual fields cannot be queried.
What we want instead: every field of the log split out into its own key, e.g.:
{
"time_local": "24/Dec/2020:09:43:45 +0800",
"remote_addr": "127.0.0.1",
"referer": "-",
"request": "HEAD / HTTP/1.1",
"status": 200,
"bytes": 0,
"http_user_agent": "curl/7.29.0",
"x_forwarded": "-",
"up_addr": "-",
"up_host": "-",
"upstream_time": "-",
"request_time": "0.000"
}
vi /etc/nginx/nginx.conf
log_format json '{ "time_local": "$time_local", '
'"remote_addr": "$remote_addr", '
'"referer": "$http_referer", '
'"request": "$request", '
'"status": $status, '
'"bytes": $body_bytes_sent, '
'"http_user_agent": "$http_user_agent", '
'"x_forwarded": "$http_x_forwarded_for", '
'"up_addr": "$upstream_addr",'
'"up_host": "$upstream_http_host",'
'"upstream_time": "$upstream_response_time",'
'"request_time": "$request_time"'
' }';
access_log /var/log/nginx/access.log json;
nginx -t
systemctl restart nginx
> /var/log/nginx/access.log
for i in `seq 10`;do curl -I 127.0.0.1 &>/dev/null ;done
cat /var/log/nginx/access.log
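Each line should now be valid JSON; a quick sanity check, assuming jq is installed (not part of the original steps):

tail -1 /var/log/nginx/access.log | jq .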
cat > /etc/filebeat/filebeat.yml << EOF
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/nginx/access.log
json.keys_under_root: true
json.overwrite_keys: true
output.elasticsearch:
hosts: ["10.0.0.51:9200"]
EOF
systemctl restart filebeat
Delete the filebeat-7.9.1 index created earlier.
Shortcoming of the current setup: the index is named filebeat-7.9.1-2020.12.29-000001
What we want instead: nginx-7.9.1-2020.12
cat > /etc/filebeat/filebeat.yml <<EOF
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/nginx/access.log
json.keys_under_root: true
json.overwrite_keys: true
output.elasticsearch:
hosts: ["10.0.0.51:9200"]
index: "nginx-%{[agent.version]}-%{+yyyy.MM}"
setup.ilm.enabled: false
setup.template.enabled: false
# log output
logging.level: info
logging.to_files: true
logging.files:
path: /var/log/filebeat
name: filebeat
keepfiles: 7
permissions: 0644
EOF
systemctl restart filebeat
for i in `seq 10`;do curl -I 127.0.0.1 &>/dev/null ;done
Shortcoming of the current setup: only the access log is collected, not the error log.
What we want instead:
nginx-access-7.9.1-2020.12
nginx-error-7.9.1-2020.12
Modify the configuration file. There are two ways to choose the index for each log:
Option 1: use a built-in field (here log.file.path) as the condition for each index name
Option 2: use custom tags as the condition for each index name
cat > /etc/filebeat/filebeat.yml << EOF
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/nginx/access.log
json.keys_under_root: true
json.overwrite_keys: true
- type: log
enabled: true
paths:
- /var/log/nginx/error.log
output.elasticsearch:
hosts: ["10.0.0.51:9200"]
indices:
- index: "nginx-access-%{[agent.version]}-%{+yyyy.MM}"
when.contains:
log.file.path: "/var/log/nginx/access.log"
- index: "nginx-error-%{[agent.version]}-%{+yyyy.MM}"
when.contains:
log.file.path: "/var/log/nginx/error.log"
setup.ilm.enabled: false
setup.template.enabled: false
# log output
logging.level: info
logging.to_files: true
logging.files:
path: /var/log/filebeat
name: filebeat
keepfiles: 7
permissions: 0644
EOF
cat > /etc/filebeat/filebeat.yml << EOF
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/nginx/access.log
json.keys_under_root: true
json.overwrite_keys: true
tags: ["access"]
- type: log
enabled: true
paths:
- /var/log/nginx/error.log
tags: ["error"]
output.elasticsearch:
hosts: ["10.0.0.51:9200"]
indices:
- index: "nginx-access-%{[agent.version]}-%{+yyyy.MM}"
when.contains:
tags: "access"
- index: "nginx-error-%{[agent.version]}-%{+yyyy.MM}"
when.contains:
tags: "error"
setup.ilm.enabled: false
setup.template.enabled: false
# log output
logging.level: info
logging.to_files: true
logging.files:
path: /var/log/filebeat
name: filebeat
keepfiles: 7
permissions: 0644
EOF
systemctl restart filebeat
for i in `seq 10`;do curl -I 127.0.0.1/1 &>/dev/null ;done
An ingest node can be seen as an Elasticsearch node that pre-processes and transforms data before indexing. It supports pipelines, which can filter and transform documents much like the filter stage in Logstash, and is quite powerful.
Ingest nodes were introduced in Elasticsearch 5.0.
The basic principle: when a node receives data, it looks up the registered pipeline whose id was given in the request, runs the data through that pipeline's processors, and then continues with Elasticsearch's standard indexing flow.
Grok conversion syntax
127.0.0.1 ==> %{IP:clientip}
- ==> -
- ==> -
[08/Oct/2020:16:34:40 +0800] ==> \\[%{HTTPDATE:nginx.access.time}\\]
"GET / HTTP/1.1" ==> "%{DATA:nginx.access.info}"
200 ==> %{NUMBER:http.response.status_code:long}
5 ==> %{NUMBER:http.response.body.bytes:long}
"-" ==> "(-|%{DATA:http.request.referrer})"
"curl/7.29.0" ==> "(-|%{DATA:user_agent.original})"
"-" ==> "(-|%{IP:clientip})"
The sample data below is converted to JSON with the following Grok pattern:
127.0.0.1 - - [29/Dec/2020:16:30:48 +0800] "HEAD /1 HTTP/1.1" 404 0 "-" "curl/7.29.0" "-"
%{IP:clientip} - - \[%{HTTPDATE:nginx.access.time}\] \"%{DATA:nginx.access.info}\" %{NUMBER:http.response.status_code:long} %{NUMBER:http.response.body.bytes:long} \"(-|%{DATA:http.request.referrer})\" \"(-|%{DATA:user_agent.original})\"
Filebeat ships the log line to the ingest node first; there a pipeline matches it against the Grok pattern, converting the plain-text line to JSON, and the result is then stored in ES.
GET _ingest/pipeline
PUT _ingest/pipeline/pipeline-nginx-access
{
"description" : "nginx access log",
"processors": [
{
"grok": {
"field": "message",
"patterns": ["%{IP:clientip} - - \\[%{HTTPDATE:nginx.access.time}\\] \"%{DATA:nginx.access.info}\" %{NUMBER:http.response.status_code:long} %{NUMBER:http.response.body.bytes:long} \"(-|%{DATA:http.request.referrer})\" \"(-|%{DATA:user_agent.original})\""]
}
},{
"remove": {
"field": "message"
}
}
]
}
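Before pointing Filebeat at the pipeline, the Grok pattern can be tested against the sample line with the standard _simulate endpoint:

POST _ingest/pipeline/pipeline-nginx-access/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "127.0.0.1 - - [29/Dec/2020:16:30:48 +0800] \"HEAD /1 HTTP/1.1\" 404 0 \"-\" \"curl/7.29.0\" \"-\""
      }
    }
  ]
}

The response shows the parsed fields (clientip, nginx.access.time, http.response.status_code, ...) exactly as they will be indexed.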
vi /etc/nginx/nginx.conf
access_log /var/log/nginx/access.log main;
systemctl restart nginx
> /var/log/nginx/access.log
> /var/log/nginx/error.log
for i in `seq 10`;do curl -I 127.0.0.1/1 &>/dev/null ;done
cat /var/log/nginx/access.log
cat /var/log/nginx/error.log
cat > /etc/filebeat/filebeat.yml << EOF
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/nginx/access.log
tags: ["access"]
- type: log
enabled: true
paths:
- /var/log/nginx/error.log
tags: ["error"]
processors:
- drop_fields:
fields: ["ecs","log"]
output.elasticsearch:
hosts: ["10.0.0.51:9200"]
pipelines:
- pipeline: "pipeline-nginx-access"
when.contains:
tags: "access"
indices:
- index: "nginx-access-%{[agent.version]}-%{+yyyy.MM}"
when.contains:
tags: "access"
- index: "nginx-error-%{[agent.version]}-%{+yyyy.MM}"
when.contains:
tags: "error"
setup.ilm.enabled: false
setup.template.enabled: false
# log output
logging.level: info
logging.to_files: true
logging.files:
path: /var/log/filebeat
name: filebeat
keepfiles: 7
permissions: 0644
EOF
systemctl restart filebeat
[root@node-1 soft]# ls /etc/filebeat/modules.d/
activemq.yml.disabled haproxy.yml.disabled nginx.yml.disabled
apache.yml.disabled ibmmq.yml.disabled o365.yml.disabled
auditd.yml.disabled icinga.yml.disabled okta.yml.disabled
aws.yml.disabled iis.yml.disabled osquery.yml.disabled
azure.yml.disabled imperva.yml.disabled panw.yml.disabled
barracuda.yml.disabled infoblox.yml.disabled postgresql.yml.disabled
bluecoat.yml.disabled iptables.yml.disabled rabbitmq.yml.disabled
cef.yml.disabled juniper.yml.disabled radware.yml.disabled
checkpoint.yml.disabled kafka.yml.disabled redis.yml.disabled
cisco.yml.disabled kibana.yml.disabled santa.yml.disabled
coredns.yml.disabled logstash.yml.disabled sonicwall.yml.disabled
crowdstrike.yml.disabled microsoft.yml.disabled sophos.yml.disabled
cylance.yml.disabled misp.yml.disabled squid.yml.disabled
elasticsearch.yml.disabled mongodb.yml.disabled suricata.yml.disabled
envoyproxy.yml.disabled mssql.yml.disabled system.yml.disabled
f5.yml.disabled mysql.yml.disabled tomcat.yml.disabled
fortinet.yml.disabled nats.yml.disabled traefik.yml.disabled
googlecloud.yml.disabled netflow.yml.disabled zeek.yml.disabled
gsuite.yml.disabled netscout.yml.disabled zscaler.yml.disabled
Reference: the Filebeat documentation
cat > /etc/filebeat/filebeat.yml <<EOF
filebeat.config.modules:
path: \${path.config}/modules.d/*.yml
reload.enabled: true
filebeat.modules:
- module: nginx
output.elasticsearch:
hosts: ["10.0.0.51:9200"]
indices:
- index: "nginx-access-%{[agent.version]}-%{+yyyy.MM}"
when.contains:
log.file.path: "/var/log/nginx/access.log"
- index: "nginx-error-%{[agent.version]}-%{+yyyy.MM}"
when.contains:
log.file.path: "/var/log/nginx/error.log"
setup.ilm.enabled: false
setup.template.enabled: false
# log output
logging.level: info
logging.to_files: true
logging.files:
path: /var/log/filebeat
name: filebeat
keepfiles: 7
permissions: 0644
EOF
filebeat modules list
filebeat modules enable nginx
filebeat modules list
Enabling a module is really just renaming the file:
mv /etc/filebeat/modules.d/nginx.yml.disabled /etc/filebeat/modules.d/nginx.yml
cat > /etc/filebeat/modules.d/nginx.yml << 'EOF'
- module: nginx
access:
enabled: true
var.paths: ["/var/log/nginx/access.log"]
error:
enabled: true
var.paths: ["/var/log/nginx/error.log"]
EOF
rm -rf /usr/share/filebeat/module/nginx/ingress_controller
The filebeat nginx module ships with three sets of pipeline.yml rules:
[root@node-1 ~]# tree /usr/share/filebeat/module/nginx
/usr/share/filebeat/module/nginx
|-- access
|   |-- config
|   |   `-- nginx-access.yml
|   |-- ingest
|   |   `-- pipeline.yml
|   `-- manifest.yml
|-- error
|   |-- config
|   |   `-- nginx-error.yml
|   |-- ingest
|   |   `-- pipeline.yml
|   `-- manifest.yml
|-- ingress_controller
|   |-- config
|   |   `-- ingress_controller.yml
|   |-- ingest
|   |   `-- pipeline.yml
|   `-- manifest.yml
`-- module.yml

9 directories, 10 files
systemctl restart filebeat
vi /etc/nginx/nginx.conf
access_log /var/log/nginx/access.log main;
systemctl restart nginx
> /var/log/nginx/access.log
> /var/log/nginx/error.log
for i in `seq 10`;do curl -I 127.0.0.1/1 &>/dev/null ;done
cat /var/log/nginx/access.log
cat /var/log/nginx/error.log
Delete the filebeat-7.9.1 index created earlier, then create a new index pattern (Index Patterns) in Kibana; the procedure is the same as above.
Top 10 IPs by number of visits
Monitoring dashboard: data refreshes in real time; click a value to filter on it, or use "+ Add filter" to exclude it.
Type | File |
---|---|
General query log | general_log |
Slow query log | log_slow_queries |
Error log | log_error, log_warnings |
Binary log | binlog |
Relay log | relay_log |
Slow query: a query whose execution time exceeds the configured threshold (long_query_time).
cd /data/soft
wget https://downloads.mysql.com/archives/get/p/23/file/mysql-5.7.30-linux-glibc2.12-x86_64.tar.gz
tar xf mysql-5.7.30-linux-glibc2.12-x86_64.tar.gz -C /opt
ln -s /opt/mysql-5.7.30-linux-glibc2.12-x86_64 /usr/local/mysql
mkdir -p /data/3306/data
mysqld --initialize-insecure --user=mysql --basedir=/usr/local/mysql --datadir=/data/3306/data
cat > /etc/my.cnf <<EOF
[mysqld]
user=mysql
basedir=/usr/local/mysql
datadir=/data/3306/data
slow_query_log=1
slow_query_log_file=/data/3306/data/slow.log
long_query_time=0.1
log_queries_not_using_indexes
port=3306
socket=/tmp/mysql.sock
[client]
socket=/tmp/mysql.sock
EOF
cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysqld
systemctl enable mysqld
systemctl start mysqld
mysql -e "select sleep(2);"
tail -f /data/3306/data/slow.log
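To check that the slow-log settings from /etc/my.cnf took effect, query the server variables (standard MySQL statements):

mysql -e "show variables like 'slow_query_log%'; show variables like 'long_query_time';"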
cat > /etc/filebeat/filebeat.yml <<EOF
filebeat.config.modules:
path: \${path.config}/modules.d/*.yml
reload.enabled: true
filebeat.modules:
- module: mysql
output.elasticsearch:
hosts: ["10.0.0.51:9200"]
indices:
- index: "mysql-slow-%{[agent.version]}-%{+yyyy.MM}"
setup.ilm.enabled: false
setup.template.enabled: false
# log output
logging.level: info
logging.to_files: true
logging.files:
path: /var/log/filebeat
name: filebeat
keepfiles: 7
permissions: 0644
EOF
Note:
- Filebeat's mysql module matches the slow-log format of the binary (tarball) MySQL distribution
- MariaDB's slow-log format differs from that of a binary MySQL install
filebeat modules enable mysql
cat > /etc/filebeat/modules.d/mysql.yml << 'EOF'
- module: mysql
slowlog:
enabled: true
var.paths: ["/data/3306/data/slow.log"]
EOF
systemctl restart filebeat
Type | File |
---|---|
Console output log | catalina.out |
Catalina engine log | catalina.<date>.log |
Application initialization log | localhost.<date>.log |
Tomcat access log | localhost_access_log.<date>.txt |
Default manager app log | manager.<date>.log |
cd /data/soft/
rpm -ivh /data/soft/jdk-*.rpm
wget https://mirror.bit.edu.cn/apache/tomcat/tomcat-9/v9.0.41/bin/apache-tomcat-9.0.41.tar.gz
tar xf apache-tomcat-9.0.41.tar.gz -C /opt/
ln -s /opt/apache-tomcat-9.0.41 /opt/tomcat/
vim /opt/tomcat/conf/server.xml
... ...
pattern="{"clientip":"%h","ClientUser":"%l","authenticated":"%u","AccessTime":"%t","method":"%r","status":"%s","SendBytes":"%b","Query?string":"%q","partner":"%{Referer}i","AgentVersion":"%{User-Agent}i"}"/>
</Host>
</Engine>
</Service>
</Server>
/opt/tomcat/bin/startup.sh
tail -f /opt/tomcat/logs/localhost_access_log.*.txt
cat > /etc/filebeat/filebeat.yml <<EOF
filebeat.inputs:
- type: log
enabled: true
paths:
- /opt/tomcat/logs/localhost_access_log.*.txt
json.keys_under_root: true
json.overwrite_keys: true
output.elasticsearch:
hosts: ["10.0.0.51:9200"]
index: "tomcat-%{[agent.version]}-%{+yyyy.MM}"
setup.ilm.enabled: false
setup.template.enabled: false
# log output
logging.level: info
logging.to_files: true
logging.files:
path: /var/log/filebeat
name: filebeat
keepfiles: 7
permissions: 0644
EOF
Wildcards (*) are supported in the paths.
systemctl restart filebeat
Characteristic of Java logs: one error produces many log lines.
Filebeat multiline matching options:
multiline.pattern: '^\<|^[[:space:]]|^[[:space:]]+(at|\.{3})\b|^Caused by:' # user-defined regex; several alternatives can be combined with "|" (or)
multiline.negate: false # default false: lines that match the pattern are merged into the previous line; true: lines that do NOT match are merged into the previous line
multiline.match: after # append to the end (after) or the beginning (before) of the previous line
# tuning parameters
multiline.max_lines: 500 # merge at most 500 lines
multiline.timeout: 5s # flush the merged event after 5s without a new matching line
cat > /etc/filebeat/filebeat.yml <<EOF
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/elasticsearch/elasticsearch.log
multiline.pattern: ^\[
multiline.negate: true
multiline.match: after
output.elasticsearch:
hosts: ["10.0.0.51:9200"]
index: "es-%{[agent.version]}-%{+yyyy.MM}"
setup.ilm.enabled: false
setup.template.enabled: false
# log output
logging.level: info
logging.to_files: true
logging.files:
path: /var/log/filebeat
name: filebeat
keepfiles: 7
permissions: 0644
EOF
systemctl restart filebeat
To generate Java log entries: deliberately break the elasticsearch configuration file and restart the service.
Check the resulting index in ES.
Note:
Redis is not covered by a Filebeat module; the Redis output has to be configured by hand.
Filebeat's Redis output only supports standalone Redis; it cannot write to Redis Sentinel, Cluster, or master/replica setups.
Filebeat does support a list of Redis hosts: several Redis servers in active/standby fashion, with no load balancing; a recovered host rejoins as a standby.
vi /etc/nginx/nginx.conf
access_log /var/log/nginx/access.log json;
systemctl restart nginx
> /var/log/nginx/access.log
> /var/log/nginx/error.log
for i in `seq 10`;do curl -I 127.0.0.1/1 &>/dev/null ;done
cat /var/log/nginx/access.log
cat /var/log/nginx/error.log
yum install redis -y
systemctl start redis
cat > /etc/filebeat/filebeat.yml <<EOF
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/nginx/access.log
json.keys_under_root: true
json.overwrite_keys: true
tags: ["access"]
- type: log
enabled: true
paths:
- /var/log/nginx/error.log
tags: ["error"]
processors:
- drop_fields:
fields: ["ecs","log"]
output.redis:
hosts: ["localhost"]
keys:
- key: "nginx_access"
when.contains:
tags: "access"
- key: "nginx_error"
when.contains:
tags: "error"
setup.ilm.enabled: false
setup.template.enabled: false
# log output
logging.level: info
logging.to_files: true
logging.files:
path: /var/log/filebeat
name: filebeat
keepfiles: 7
permissions: 0644
EOF
Optimization: put both logs into a single Redis key.
cat > /etc/filebeat/filebeat.yml <<EOF
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]
- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]
processors:
  - drop_fields:
      fields: ["ecs","log"]
output.redis:
  hosts: ["localhost"]
  key: "nginx"
setup.ilm.enabled: false
setup.template.enabled: false
# log output
logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
EOF
systemctl restart filebeat
[root@node-1 ~]# redis-cli
127.0.0.1:6379> keys *
1) "nginx_error"
2) "nginx_access"
127.0.0.1:6379> type nginx_access
list
127.0.0.1:6379> LLEN nginx_access
(integer) 10
127.0.0.1:6379> LRANGE nginx_access 0 0
1) "{\"@timestamp\":\"2020-12-30T01:21:57.664Z\",\"@metadata\":{\"beat\":\"filebeat\",\"type\":\"_doc\",\"version\":\"7.9.1\"},\"bytes\":0,\"upstream_time\":\"-\",\"request_time\":\"0.000\",\"referer\":\"-\",\"up_host\":\"-\",\"time_local\":\"30/Dec/2020:09:19:08 +0800\",\"tags\":[\"access\"],\"input\":{\"type\":\"log\"},\"up_addr\":\"-\",\"status\":404,\"remote_addr\":\"127.0.0.1\",\"http_user_agent\":\"curl/7.29.0\",\"x_forwarded\":\"-\",\"host\":{\"name\":\"node-1\"},\"agent\":{\"hostname\":\"node-1\",\"ephemeral_id\":\"ee22f8d8-db49-4a93-a2a2-76e971707a39\",\"id\":\"999917a2-5dc1-4495-80f0-bd700385e0ec\",\"name\":\"node-1\",\"type\":\"filebeat\",\"version\":\"7.9.1\"},\"request\":\"HEAD /1 HTTP/1.1\"}"
127.0.0.1:6379> LRANGE nginx_access 0 -1
cd /data/soft/
rpm -ivh logstash-7.9.1.rpm
cat <<EOF >/etc/logstash/conf.d/redis.conf
input {
redis {
host => "localhost"
port => "6379"
db => "0"
key => "nginx_access"
data_type => "list"
}
redis {
host => "localhost"
port => "6379"
db => "0"
key => "nginx_error"
data_type => "list"
}
}
output {
stdout {}
if "access" in [tags] {
elasticsearch {
hosts => "http://10.0.0.51:9200"
manage_template => false
index => "nginx_access-%{+yyyy.MM}"
}
}
if "error" in [tags] {
elasticsearch {
hosts => "http://10.0.0.51:9200"
manage_template => false
index => "nginx_error-%{+yyyy.MM}"
}
}
}
EOF
Optimization: read the single Redis key and route the output to ES indices by tags.
cat <<EOF >/etc/logstash/conf.d/redis.conf
input {
  redis {
    host => "localhost"
    port => "6379"
    db => "0"
    key => "nginx"
    data_type => "list"
  }
}
filter {
  mutate {
    convert => ["upstream_time", "float"]
    convert => ["request_time", "float"]
  }
}
output {
  stdout {}
  if "access" in [tags] {
    elasticsearch {
      hosts => "http://10.0.0.51:9200"
      manage_template => false
      index => "nginx_access-%{+yyyy.MM}"
    }
  }
  if "error" in [tags] {
    elasticsearch {
      hosts => "http://10.0.0.51:9200"
      manage_template => false
      index => "nginx_error-%{+yyyy.MM}"
    }
  }
}
EOF
# Startup is slow; it can look as if it has hung
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis.conf
> /var/log/nginx/access.log
> /var/log/nginx/error.log
for i in `seq 10000`;do curl -I 127.0.0.1/1 &>/dev/null ;done
redis-cli LLEN nginx_access
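While Logstash drains the backlog, the list length can be watched shrinking; a small sketch, assuming the watch utility is available:

watch -n 1 'redis-cli LLEN nginx_access'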
cat > /etc/filebeat/filebeat.yml <<EOF
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/nginx/access.log
json.keys_under_root: true
json.overwrite_keys: true
tags: ["access"]
- type: log
enabled: true
paths:
- /var/log/nginx/error.log
tags: ["error"]
processors:
- drop_fields:
fields: ["ecs","log"]
output.redis:
hosts: ["localhost","10.0.0.52"]
key: "nginx"
setup.ilm.enabled: false
setup.template.enabled: false
# log output
logging.level: info
logging.to_files: true
logging.files:
path: /var/log/filebeat
name: filebeat
keepfiles: 7
permissions: 0644
EOF
cat <<EOF >/etc/logstash/conf.d/redis.conf
input {
redis {
host => "localhost"
port => "6379"
db => "0"
key => "nginx"
data_type => "list"
}
redis {
host => "10.0.0.52"
port => "6379"
db => "0"
key => "nginx"
data_type => "list"
}
}
filter {
mutate {
convert => ["upstream_time", "float"]
convert => ["request_time", "float"]
}
}
output {
stdout {}
if "access" in [tags] {
elasticsearch {
hosts => "http://10.0.0.51:9200"
manage_template => false
index => "nginx_access-%{+yyyy.MM}"
}
}
if "error" in [tags] {
elasticsearch {
hosts => "http://10.0.0.51:9200"
manage_template => false
index => "nginx_error-%{+yyyy.MM}"
}
}
}
EOF
cat <<EOF >>/etc/hosts
10.0.0.51 node-51
10.0.0.52 node-52
10.0.0.53 node-53
EOF
ssh-keygen
ssh-copy-id 10.0.0.51
ssh-copy-id 10.0.0.52
ssh-copy-id 10.0.0.53
cd /data/soft
tar zxf zookeeper-3.4.11.tar.gz -C /opt/
ln -s /opt/zookeeper-3.4.11/ /opt/zookeeper
mkdir -p /data/zookeeper
cp /opt/zookeeper/conf/zoo_sample.cfg /opt/zookeeper/conf/zoo.cfg
cat >/opt/zookeeper/conf/zoo.cfg<<EOF
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
clientPort=2181
server.1=10.0.0.51:2888:3888
server.2=10.0.0.52:2888:3888
server.3=10.0.0.53:2888:3888
EOF
echo "1" > /data/zookeeper/myid
cat /data/zookeeper/myid
rsync -avz /opt/zookeeper* 10.0.0.52:/opt/
rsync -avz /opt/zookeeper* 10.0.0.53:/opt/
rsync -avz jdk-*.rpm 10.0.0.52:/data/soft/
rsync -avz jdk-*.rpm 10.0.0.53:/data/soft/
rpm -ivh /data/soft/jdk-*.rpm
mkdir -p /data/zookeeper
echo "2" > /data/zookeeper/myid
cat /data/zookeeper/myid
rpm -ivh /data/soft/jdk-*.rpm
mkdir -p /data/zookeeper
echo "3" > /data/zookeeper/myid
cat /data/zookeeper/myid
/opt/zookeeper/bin/zkServer.sh start
/opt/zookeeper/bin/zkServer.sh status
# Create a znode on one of the nodes
/opt/zookeeper/bin/zkCli.sh -server 10.0.0.51:2181
create /test "hello"
# Check from another node that it can be read
/opt/zookeeper/bin/zkCli.sh -server 10.0.0.52:2181
get /test
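Each node's role (leader/follower) can also be checked with ZooKeeper's four-letter-word commands, assuming nc is installed and the commands are enabled (the default in the 3.4.x line):

echo stat | nc 10.0.0.51 2181 | grep Mode
echo stat | nc 10.0.0.52 2181 | grep Mode
echo stat | nc 10.0.0.53 2181 | grep Mode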
cd /data/soft/
tar zxf kafka_2.11-1.0.0.tgz -C /opt/
ln -s /opt/kafka_2.11-1.0.0/ /opt/kafka
mkdir /opt/kafka/logs
cat >/opt/kafka/config/server.properties<<EOF
broker.id=1
listeners=PLAINTEXT://10.0.0.51:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/opt/kafka/logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=24
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=10.0.0.51:2181,10.0.0.52:2181,10.0.0.53:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
EOF
rsync -avz /opt/kafka* 10.0.0.52:/opt/
rsync -avz /opt/kafka* 10.0.0.53:/opt/
sed -i "s#10.0.0.51:9092#10.0.0.52:9092#g" /opt/kafka/config/server.properties
sed -i "s#broker.id=1#broker.id=2#g" /opt/kafka/config/server.properties
sed -i "s#10.0.0.51:9092#10.0.0.53:9092#g" /opt/kafka/config/server.properties
sed -i "s#broker.id=1#broker.id=3#g" /opt/kafka/config/server.properties
/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties
jps
# Create a topic
/opt/kafka/bin/kafka-topics.sh --create --zookeeper 10.0.0.51:2181,10.0.0.52:2181,10.0.0.53:2181 --partitions 3 --replication-factor 3 --topic kafkatest
# Describe the topic
/opt/kafka/bin/kafka-topics.sh --describe --zookeeper 10.0.0.51:2181,10.0.0.52:2181,10.0.0.53:2181 --topic kafkatest
# Delete the topic
/opt/kafka/bin/kafka-topics.sh --delete --zookeeper 10.0.0.51:2181,10.0.0.52:2181,10.0.0.53:2181 --topic kafkatest
# 1. Create a topic
/opt/kafka/bin/kafka-topics.sh --create --zookeeper 10.0.0.51:2181,10.0.0.52:2181,10.0.0.53:2181 --partitions 3 --replication-factor 3 --topic messagetest
# 2. Send messages (producer)
/opt/kafka/bin/kafka-console-producer.sh --broker-list 10.0.0.51:9092,10.0.0.52:9092,10.0.0.53:9092 --topic messagetest
# 3. Consume on another node to verify
/opt/kafka/bin/kafka-console-consumer.sh --zookeeper 10.0.0.51:2181,10.0.0.52:2181,10.0.0.53:2181 --topic messagetest --from-beginning
# 4. List all topics
/opt/kafka/bin/kafka-topics.sh --list --zookeeper 10.0.0.51:2181,10.0.0.52:2181,10.0.0.53:2181
/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
cat >/etc/filebeat/filebeat.yml <<EOF
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/nginx/access.log
tags: ["access"]
- type: log
enabled: true
paths:
- /var/log/nginx/error.log
tags: ["error"]
output.kafka:
hosts: ["10.0.0.51:9092", "10.0.0.52:9092", "10.0.0.53:9092"]
topic: 'filebeat'
setup.ilm.enabled: false
setup.template.enabled: false
EOF
systemctl restart filebeat
cat >/etc/logstash/conf.d/kafka.conf <<EOF
input {
kafka{
bootstrap_servers=>["10.0.0.51:9092,10.0.0.52:9092,10.0.0.53:9092"]
topics=>["filebeat"]
#group_id=>"logstash"
codec => "json"
}
}
output {
stdout {}
if "access" in [tags] {
elasticsearch {
hosts => "http://10.0.0.51:9200"
manage_template => false
index => "nginx_access-%{+yyyy.MM}"
}
}
if "error" in [tags] {
elasticsearch {
hosts => "http://10.0.0.51:9200"
manage_template => false
index => "nginx_error-%{+yyyy.MM}"
}
}
}
EOF
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/kafka.conf
> /var/log/nginx/access.log
> /var/log/nginx/error.log
for i in `seq 100`;do curl -I 127.0.0.1/1 &>/dev/null ;done
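To confirm the events reached Kafka while Logstash consumes them, the topic can be listed and sampled with the standard Kafka CLI tools (the consumer below uses --bootstrap-server, which Kafka 1.0 also supports):

/opt/kafka/bin/kafka-topics.sh --list --zookeeper 10.0.0.51:2181,10.0.0.52:2181,10.0.0.53:2181
/opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server 10.0.0.51:9092 --topic filebeat --from-beginning --max-messages 5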
Option 1: run Filebeat and the business container in the same Pod and mount the logs into a shared emptyDir volume (see the sketch below).
Option 2: deploy Filebeat as a DaemonSet and collect the logs of the selected containers on every k8s node.
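A minimal sketch of option 1 (the Pod name, image tags, and the filebeat-sidecar-config ConfigMap are illustrative assumptions, not from the original): the app and Filebeat share an emptyDir, and Filebeat ships the files according to its own filebeat.yml.

cat > filebeat-sidecar-pod.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-filebeat
spec:
  containers:
  - name: nginx
    image: nginx:1.18
    volumeMounts:
    - name: logs                       # the app writes /var/log/nginx into the shared volume
      mountPath: /var/log/nginx
  - name: filebeat
    image: docker.elastic.co/beats/filebeat:7.9.1
    args: ["-c", "/etc/filebeat.yml", "-e"]
    volumeMounts:
    - name: logs                       # the sidecar reads the same emptyDir
      mountPath: /var/log/nginx
      readOnly: true
    - name: config                     # filebeat.yml from the (hypothetical) ConfigMap
      mountPath: /etc/filebeat.yml
      subPath: filebeat.yml
      readOnly: true
  volumes:
  - name: logs
    emptyDir: {}
  - name: config
    configMap:
      name: filebeat-sidecar-config
EOF
kubectl apply -f filebeat-sidecar-pod.yaml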
Source: https://www.cnblogs.com/backups/p/EBLK_2.html