Building a Log Audit Server: RabbitMQ + ELK Log Platform

2023-11-16


Building a Log Analysis and Monitoring Platform on CentOS with the ELK Suite

January 30, 2015, 17:32:29 · i_chips

https://blog.csdn.net/i_chips/article/details/43309415

1 Overview

The ELK suite (ELK stack) refers to the trio of ElasticSearch, Logstash, and Kibana. Together, these three pieces of software form a log analysis and monitoring toolchain.

Since each of the three components has gone through many versions, it is best to use the combination recommended on the ElasticSearch website: http://www.elasticsearch.org/overview/elkdownloads/

2 Environment Preparation

2.1 Software Requirements

This article deploys the ELK suite on a single CentOS machine.

The specific versions are:

  • OS version: CentOS 6.4;
  • JDK version: 1.7.0;
  • Logstash version: 1.4.2;
  • ElasticSearch version: 1.4.2;
  • Kibana version: 3.1.2;

2.2 Firewall Configuration

To let the HTTP service and others work normally, disable the firewall:

# service iptables stop             (CentOS 6)
# systemctl stop firewalld.service  (CentOS 7)

Alternatively, leave the firewall running and open the relevant ports in iptables:

# vim /etc/sysconfig/iptables
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9200 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9292 -j ACCEPT
# service iptables restart

3 Installing the JDK

ElasticSearch and Logstash depend on the JDK, so install it first:

# yum -y install java-1.7.0-openjdk*
# java -version

4 Installing ElasticSearch

ElasticSearch serves HTTP on port 9200 by default and uses TCP port 9300 for inter-node communication.

Download ElasticSearch:

# mkdir -p /opt/software && cd /opt/software
# sudo wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.4.2.tar.gz
# sudo tar -zxvf elasticsearch-1.4.2.tar.gz -C /usr/local/
# ln -s /usr/local/elasticsearch-1.4.2 /usr/local/elasticsearch

Install elasticsearch-servicewrapper and start the ElasticSearch service:

# sudo wget https://github.com/elasticsearch/elasticsearch-servicewrapper/archive/master.tar.gz
# sudo tar -zxvf master.tar.gz
# mv /opt/software/elasticsearch-servicewrapper-master/service /usr/local/elasticsearch/bin/
# /usr/local/elasticsearch/bin/service/elasticsearch start

Verify that the ElasticSearch service is up; a 200 status code is expected:

# curl -X GET http://localhost:9200
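The same check can be scripted. Below is a minimal sketch that validates the response body; the sample JSON is illustrative (a real node reports its own name and version), and on a live system you would fetch the body from http://localhost:9200 instead of hard-coding it:

```python
import json

# Trimmed example of the JSON body ElasticSearch 1.x returns on GET /
# (illustrative values; a real node reports its own name and version)
sample_response = '''
{
  "status": 200,
  "name": "elk-node",
  "version": {"number": "1.4.2", "lucene_version": "4.10.2"},
  "tagline": "You Know, for Search"
}
'''

def es_is_healthy(body):
    """Return True when the root endpoint reports status 200."""
    return json.loads(body).get("status") == 200

print(es_is_healthy(sample_response))  # → True
```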

5 Installing Logstash

Logstash serves on port 9292 by default.

Download Logstash:

# sudo wget https://download.elasticsearch.org/logstash/logstash/logstash-1.4.2.tar.gz
# sudo tar -zxvf logstash-1.4.2.tar.gz -C /usr/local/
# ln -s /usr/local/logstash-1.4.2 /usr/local/logstash

Run a quick test of the Logstash service; whatever you type should be echoed back as a simple log line:

# /usr/local/logstash/bin/logstash -e 'input { stdin { } } output { stdout {} }'

Create a Logstash configuration file and test again; the input should now be printed as a structured log event:

# mkdir -p /usr/local/logstash/etc
# vim /usr/local/logstash/etc/hello_search.conf
input {
  stdin { type => "human" }
}
output {
  stdout { codec => rubydebug }
  elasticsearch { host => "10.111.121.22" port => 9300 }
}
# /usr/local/logstash/bin/logstash -f /usr/local/logstash/etc/hello_search.conf
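The rubydebug output this produces is essentially a small event record. As a rough Python imitation of that wrapping step (the field set mirrors typical rubydebug output; this is a sketch, not Logstash's actual implementation):

```python
import datetime
import socket

def make_event(line, event_type="human"):
    # Wrap a raw input line the way Logstash's rubydebug codec displays it:
    # message, @version, @timestamp (UTC, millisecond precision), host
    return {
        "message": line.rstrip("\n"),
        "@version": "1",
        "@timestamp": datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z",
        "host": socket.gethostname(),
        "type": event_type,
    }

event = make_event("hello world\n")
print(event["message"])  # → hello world
```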

6 Installing Kibana

CentOS comes with Apache preinstalled, so simply copy the Kibana code into a directory Apache can serve.

# sudo wget https://download.elasticsearch.org/kibana/kibana/kibana-3.1.2.tar.gz
# sudo tar -zxvf kibana-3.1.2.tar.gz
# mv kibana-3.1.2 /var/www/html/kibana

Edit Kibana's configuration file, replacing the elasticsearch line with the following:

# vim /var/www/html/kibana/config.js
elasticsearch: "http://10.111.121.22:9200",

Start the HTTP service:

# service httpd start

Edit the ElasticSearch configuration file, append the following two lines, and restart the ElasticSearch service:

# vim /usr/local/elasticsearch/config/elasticsearch.yml
http.cors.allow-origin: "/.*/"
http.cors.enabled: true
# /usr/local/elasticsearch/bin/service/elasticsearch restart

You can now access Kibana in a browser:

http://10.111.121.22/kibana

Now type anything into the earlier Logstash session and the log entries will show up in Kibana.

7 Configuring Logstash

Create another Logstash configuration file. This one takes the HTTP logs and system logs as input and sends the output straight to ElasticSearch instead of printing it:

# vim /usr/local/logstash/etc/logstash_agent.conf
input {
  file { type => "http.access" path => ["/var/log/httpd/access_log"] }
  file { type => "http.error"  path => ["/var/log/httpd/error_log"] }
  file { type => "messages"    path => ["/var/log/messages"] }
}
output {
  elasticsearch { host => "10.111.121.22" port => 9300 }
}
# nohup /usr/local/logstash/bin/logstash -f /usr/local/logstash/etc/logstash_agent.conf &
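Logstash's file input behaves like `tail -f`: it remembers how far it has read and only ships newly appended lines. A toy sketch of that follow loop (single reader, no sincedb persistence; purely illustrative):

```python
import os
import tempfile

def read_new_lines(path, offset):
    """Read lines appended since `offset`; return (lines, new_offset)."""
    with open(path, "r") as f:
        f.seek(offset)
        lines = [l.rstrip("\n") for l in f.readlines()]
        return lines, f.tell()

# Simulate a growing log file
tmp = tempfile.NamedTemporaryFile("w", delete=False, suffix=".log")
tmp.write("first line\n")
tmp.flush()
lines, pos = read_new_lines(tmp.name, 0)        # picks up "first line"
tmp.write("second line\n")
tmp.flush()
tmp.close()
new_lines, pos = read_new_lines(tmp.name, pos)  # picks up only the new line
print(new_lines)  # → ['second line']
os.unlink(tmp.name)
```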

A simple log analysis and monitoring platform is now up and running; use Kibana to browse it.

8 References

1. "Installing the logstash, elasticsearch, kibana trio", http://www.cnblogs.com/yjf512/p/4194012.html

A Super-Detailed ELK Log Management Platform Tutorial for Beginners

July 15, 2018, 13:35:52 · li123128

https://blog.csdn.net/li123128/article/details/81052374

I. Introduction

II. Installing the JDK

III. Installing Elasticsearch

IV. Installing Logstash

V. Installing Kibana

VI. Basic Kibana Usage

System environment: CentOS Linux release 7.4.1708 (Core)

Current pain points:

  1. Developers cannot log in to the production servers to inspect detailed logs.
  2. Every system keeps its own logs; the data is scattered and hard to search.
  3. Log volume is large, queries are slow, and the data is not real-time enough.

I. Introduction

1. Components

ELK is made up of three components: Elasticsearch, Logstash, and Kibana.
Elasticsearch is an open-source distributed search engine. Its features include: distribution, zero configuration, automatic discovery, automatic index sharding, index replicas, a RESTful interface, multiple data sources, and automatic search load balancing.
Logstash is a fully open-source tool that collects and parses your logs and stores them for later use.
Kibana is a free, open-source tool that provides a friendly web UI for the log data in Logstash and ElasticSearch, helping you aggregate, analyze, and search important log data.

2. The four components

Logstash: the logstash server side, which collects logs;
Elasticsearch: stores the various logs;
Kibana: a web interface for querying and visualizing logs;
Logstash Forwarder: the logstash client side, which ships logs to the logstash server over the lumberjack protocol;

3. Workflow

Deploy logstash on every service whose logs need to be collected. Acting as a logstash agent (logstash shipper), it monitors and filters the collected logs and sends the filtered content to Redis; a logstash indexer then gathers the logs together and hands them to the full-text search service ElasticSearch, which supports custom searches that Kibana combines into page displays.
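That shipper → broker → indexer flow can be sketched in miniature. In this sketch an in-memory queue stands in for Redis and a plain list stands in for the ElasticSearch index (both stand-ins are illustrative assumptions; a real deployment would use the Logstash redis plugins):

```python
import queue

broker = queue.Queue()  # stands in for Redis

def shipper(raw_line):
    # logstash agent/shipper: filter, then push to the broker
    if "DEBUG" not in raw_line:  # crude stand-in for a filter stage
        broker.put(raw_line)

def indexer(store):
    # logstash indexer: drain the broker and hand events to the search store
    while not broker.empty():
        store.append({"message": broker.get()})

es_store = []  # stands in for an ElasticSearch index
shipper("INFO service started")
shipper("DEBUG heartbeat")     # filtered out by the shipper
indexer(es_store)
print(len(es_store))  # → 1
```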

The environment below is installed on both nodes.

II. Installing the JDK


Configure the Aliyun mirror: wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

yum clean all

yum makecache

Logstash requires a Java runtime, and Elasticsearch requires at least Java 7.

[root@controller ~]# yum install java-1.8.0-openjdk -y

[root@controller ~]# java -version

openjdk version "1.8.0_151"

OpenJDK Runtime Environment (build 1.8.0_151-b12)

OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)

1. Disable the firewall

systemctl stop firewalld.service     # stop firewalld

systemctl disable firewalld.service  # keep firewalld from starting at boot

2. Disable SELinux

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

setenforce 0

III. Installing Elasticsearch

Base setup (perform on both elk-node1 and elk-node2):


1) Download and install the GPG key

[root@elk-node1 ~]# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

2) Add the yum repository

[root@elk-node1 ~]# vim /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

3) Install elasticsearch

[root@elk-node1 ~]# yum install -y elasticsearch

4) Register the service

chkconfig --add elasticsearch

5) Enable it at boot

systemctl daemon-reload
systemctl enable elasticsearch.service

6) Edit the configuration

[root@elk-node1 ~]# cd /etc/elasticsearch/

[root@elk-node1 elasticsearch]# ls

elasticsearch.yml? logging.yml? scripts

[root@elk-node1 elasticsearch]# cp elasticsearch.yml{,.bak}

[root@elk-node1 elasticsearch]# mkdir -p /data/es-data

[root@elk-node1 elasticsearch]# vim elasticsearch.yml

[root@elk-node1 elasticsearch]# grep '^[a-z]' elasticsearch.yml
cluster.name: hejianlai                          // cluster name
node.name: elk-node1                             // node name
path.data: /data/es-data                         // data directory
path.logs: /var/log/elasticsearch/               // log directory
bootstrap.memory_lock: true                      // lock memory
network.host: 0.0.0.0                            // listen address
http.port: 9200                                  // port
discovery.zen.ping.multicast.enabled: false      // switch to unicast
discovery.zen.ping.unicast.hosts: ["192.168.247.135", "192.168.247.133"]

[root@elk-node1 elasticsearch]# systemctl start elasticsearch

You have new mail in /var/spool/mail/root

[root@elk-node1 elasticsearch]# systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Thu 2018-07-12 22:00:47 CST; 9s ago
     Docs: http://www.elastic.co
  Process: 22333 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -Des.pidfile=${PID_DIR}/elasticsearch.pid -Des.default.path.home=${ES_HOME} -Des.default.path.logs=${LOG_DIR} -Des.default.path.data=${DATA_DIR} -Des.default.path.conf=${CONF_DIR} (code=exited, status=1/FAILURE)
  Process: 22331 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
 Main PID: 22333 (code=exited, status=1/FAILURE)

Jul 12 22:00:47 elk-node1 elasticsearch[22333]: at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
Jul 12 22:00:47 elk-node1 elasticsearch[22333]: at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
Jul 12 22:00:47 elk-node1 elasticsearch[22333]: at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
Jul 12 22:00:47 elk-node1 elasticsearch[22333]: at java.nio.file.Files.createDirectory(Files.java:674)
Jul 12 22:00:47 elk-node1 elasticsearch[22333]: at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781)
Jul 12 22:00:47 elk-node1 elasticsearch[22333]: at java.nio.file.Files.createDirectories(Files.java:767)
Jul 12 22:00:47 elk-node1 elasticsearch[22333]: at org.elasticsearch.bootstrap.Security.ensureDirectoryExists(Security.java:337)
Jul 12 22:00:47 elk-node1 systemd[1]: elasticsearch.service: main process exited, code=exited, status=1/FAILURE
Jul 12 22:00:47 elk-node1 systemd[1]: Unit elasticsearch.service entered failed state.
Jul 12 22:00:47 elk-node1 systemd[1]: elasticsearch.service failed.

[root@elk-node1 elasticsearch]# cd /var/log/elasticsearch/

[root@elk-node1 elasticsearch]# ll

total 4

-rw-r--r-- 1 elasticsearch elasticsearch    0 Jul 12 22:00 hejianlai_deprecation.log
-rw-r--r-- 1 elasticsearch elasticsearch    0 Jul 12 22:00 hejianlai_index_indexing_slowlog.log
-rw-r--r-- 1 elasticsearch elasticsearch    0 Jul 12 22:00 hejianlai_index_search_slowlog.log
-rw-r--r-- 1 elasticsearch elasticsearch 2232 Jul 12 22:00 hejianlai.log

[root@elk-node1 elasticsearch]# tail hejianlai.log

    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
    at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
    at java.nio.file.Files.createDirectory(Files.java:674)
    at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781)
    at java.nio.file.Files.createDirectories(Files.java:767)
    at org.elasticsearch.bootstrap.Security.ensureDirectoryExists(Security.java:337)
    at org.elasticsearch.bootstrap.Security.addPath(Security.java:314)
    ... 7 more

[root@elk-node1 elasticsearch]# less hejianlai.log

You have new mail in /var/spool/mail/root

[root@elk-node1 elasticsearch]# grep elas /etc/passwd

elasticsearch:x:991:988:elasticsearch user:/home/elasticsearch:/sbin/nologin

# The error means no permission on /data/es-data; granting ownership fixes it

[root@elk-node1 elasticsearch]# chown -R elasticsearch:elasticsearch /data/es-data/

[root@elk-node1 elasticsearch]# systemctl start elasticsearch

[root@elk-node1 elasticsearch]# systemctl status elasticsearch

● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-07-12 22:03:28 CST; 4s ago
     Docs: http://www.elastic.co
  Process: 22398 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
 Main PID: 22400 (java)
   CGroup: /system.slice/elasticsearch.service
           └─22400 /bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMe...

Jul 12 22:03:29 elk-node1 elasticsearch[22400]: [2018-07-12 22:03:29,739][WARN ][bootstrap] If you are logged in interactively, you will have to re-login for the new limits to take effect.
Jul 12 22:03:29 elk-node1 elasticsearch[22400]: [2018-07-12 22:03:29,899][INFO ][node] [elk-node1] version[2.4.6], pid[22400], build[5376dca/2017-07-18T12:17:44Z]
Jul 12 22:03:29 elk-node1 elasticsearch[22400]: [2018-07-12 22:03:29,899][INFO ][node] [elk-node1] initializing ...
Jul 12 22:03:30 elk-node1 elasticsearch[22400]: [2018-07-12 22:03:30,644][INFO ][plugins] [elk-node1] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
Jul 12 22:03:30 elk-node1 elasticsearch[22400]: [2018-07-12 22:03:30,845][INFO ][env] [elk-node1] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [1.7gb], n...types [rootfs]
Jul 12 22:03:30 elk-node1 elasticsearch[22400]: [2018-07-12 22:03:30,845][INFO ][env] [elk-node1] heap size [1007.3mb], compressed ordinary object pointers [true]
Jul 12 22:03:33 elk-node1 elasticsearch[22400]: [2018-07-12 22:03:33,149][INFO ][node] [elk-node1] initialized
Jul 12 22:03:33 elk-node1 elasticsearch[22400]: [2018-07-12 22:03:33,149][INFO ][node] [elk-node1] starting ...
Jul 12 22:03:33 elk-node1 elasticsearch[22400]: [2018-07-12 22:03:33,333][INFO ][transport] [elk-node1] publish_address {192.168.247.135:9300}, bound_addresses {[::]:9300}
Jul 12 22:03:33 elk-node1 elasticsearch[22400]: [2018-07-12 22:03:33,345][INFO ][discovery] [elk-node1] hejianlai/iUUTEKhyTxyL78aGtrrBOw

Hint: Some lines were ellipsized, use -l to show in full.

Access URL: http://192.168.247.135:9200/

Installing ES plugins


# Count the documents in the index

[root@elk-node1 ~]# curl -i -XGET 'http://192.168.247.135:9200/_count?pretty' -d '
> {
>   "query": {
>     "match_all": {}
>   }
> }'
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Content-Length: 95

{
  "count" : 0,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  }
}
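The count query above is ordinary JSON, so it is easy to build and serialize from a script instead of typing it into curl by hand. A small sketch:

```python
import json

# Body for GET /_count: a match_all query, as in the curl example above
count_query = {"query": {"match_all": {}}}
body = json.dumps(count_query)
print(body)  # → {"query": {"match_all": {}}}
```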

# Paid ES plugin; not recommended (this one is not installed)

[root@elk-node1 bin]# /usr/share/elasticsearch/bin/plugin install marvel-agent

# Install the open-source elasticsearch-head plugin

[root@elk-node1 bin]# /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head

-> Installing mobz/elasticsearch-head...

Trying https://github.com/mobz/elasticsearch-head/archive/master.zip ...

Downloading ...........................................................................................................................................DONE

Verifying https://github.com/mobz/elasticsearch-head/archive/master.zip checksums if available ...
NOTE: Unable to verify checksum for downloaded plugin (unable to find .sha1 or .md5 file to verify)

Access URL: http://192.168.247.135:9200/_plugin/head/

Create queries with the POST method

Retrieve data with the GET method

Basic queries

elk-node2 configuration


[root@elk-node2 elasticsearch]# grep '^[a-z]' /etc/elasticsearch/elasticsearch.yml
cluster.name: hejianlai
node.name: elk-node2
path.data: /data/es-data
path.logs: /var/log/elasticsearch/
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.168.247.135", "192.168.247.133"]

When building a multi-node Elasticsearch (ES) cluster, it is normally enough to set the same cluster.name in elasticsearch.yml on every node; ES matches them automatically and forms the cluster. However, nodes on different network segments often cannot discover each other automatically. In that case, enable unicast and specify the nodes explicitly by setting the following two parameters in elasticsearch.yml:


discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.168.247.135", "192.168.247.133"]

Restart elk-node1 and start elk-node2, then visit: 192.168.247.135:9200/_plugin/head/

Installing the kopf monitoring plugin


[root@elk-node1 ~]# /usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf

-> Installing lmenezes/elasticsearch-kopf...

Trying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip ...

Downloading .........................................................................................................................DONE

Verifying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip checksums if available ...
NOTE: Unable to verify checksum for downloaded plugin (unable to find .sha1 or .md5 file to verify)
Installed kopf into /usr/share/elasticsearch/plugins/kopf

Access URL: http://192.168.247.135:9200/_plugin/kopf/#!/cluster

IV. Installing Logstash


1) Download and install the GPG key

[root@elk-node1 ~]# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

2) Add the yum repository

[root@elk-node1 ~]# vim /etc/yum.repos.d/logstash.repo
[logstash-2.1]
name=Logstash repository for 2.1.x packages
baseurl=http://packages.elastic.co/logstash/2.1/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

3) Install logstash

[root@elk-node1 ~]# yum install -y logstash

4) Test with some data

# Simple input/output

[root@elk-node1 ~]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{} }'

Settings: Default filter workers: 1

Logstash startup completed

hello world

2018-07-13T00:54:34.497Z elk-node1 hello world

hi hejianlai

2018-07-13T00:54:44.453Z elk-node1 hi hejianlai

來賓張家輝

2018-07-13T00:55:35.278Z elk-node1 來賓張家輝

^CSIGINT received. Shutting down the pipeline. {:level=>:warn}

Logstash shutdown completed

# rubydebug can be used for detailed, structured output

[root@elk-node1 ~]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug } }'

Settings: Default filter workers: 1

Logstash startup completed

mimi

{
       "message" => "mimi",
      "@version" => "1",
    "@timestamp" => "2018-07-13T00:58:59.980Z",
          "host" => "elk-node1"
}

# Write the content into elasticsearch

[root@elk-node1 ~]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch{hosts=>["192.168.247.135"]} }'

Settings: Default filter workers: 1

Logstash startup completed

I love you

1232

hejianlai

渣渣輝

^CSIGINT received. Shutting down the pipeline. {:level=>:warn}

Logstash shutdown completed

[root@elk-node1 ~]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["192.168.247.135:9200"]} stdout{ codec => rubydebug}}'

Settings: Default filter workers: 1

Logstash startup completed

廣州

{
       "message" => "廣州",
      "@version" => "1",
    "@timestamp" => "2018-07-13T02:17:40.800Z",
          "host" => "elk-node1"
}

hehehehehehehe

{
       "message" => "hehehehehehehe",
      "@version" => "1",
    "@timestamp" => "2018-07-13T02:17:49.400Z",
          "host" => "elk-node1"
}

Writing the Logstash log-collection configuration file


# Interactive input

[root@elk-node1 ~]# cat /etc/logstash/conf.d/logstash-01.conf

input { stdin { } }

output {
        elasticsearch { hosts => ["192.168.247.135:9200"]}
        stdout { codec => rubydebug }
}

Run it:

[root@elk-node1 ~]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-01.conf

Settings: Default filter workers: 1

Logstash startup completed

l o v e

{
       "message" => "l o v e",
      "@version" => "1",
    "@timestamp" => "2018-07-13T02:37:42.670Z",
          "host" => "elk-node1"
}

地久梁朝偉

{
       "message" => "地久梁朝偉",
      "@version" => "1",
    "@timestamp" => "2018-07-13T02:38:20.049Z",
          "host" => "elk-node1"
}

# Collect system logs

[root@elk-node1 conf.d]# cat /etc/logstash/conf.d/systemlog.conf

input{
    file {
    path => "/var/log/messages"
    type => "system"
    start_position => "beginning"
    }
}
output{
    elasticsearch{
    hosts => ["192.168.247.135:9200"]
    index => "systemlog-%{+YYYY.MM.dd}"
    }
}

# Run in the background

[root@elk-node1 conf.d]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/systemlog.conf &
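The index name `systemlog-%{+YYYY.MM.dd}` embeds the event date, so a new index is created each day. The equivalent name for a given timestamp can be computed like this (the Joda-style tokens map directly onto strftime codes):

```python
import datetime

def daily_index(prefix, ts):
    # Joda-style +YYYY.MM.dd corresponds to strftime %Y.%m.%d
    return "%s-%s" % (prefix, ts.strftime("%Y.%m.%d"))

print(daily_index("systemlog", datetime.datetime(2018, 7, 13)))  # → systemlog-2018.07.13
```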

Writing the configuration file that collects ELK error logs


[root@elk-node1 ~]# cat /etc/logstash/conf.d/elk_log.conf
input {
    file {
      path => "/var/log/messages"
      type => "system"
      start_position => "beginning"
    }
}
input {
    file {
       path => "/var/log/elasticsearch/hejianlai.log"
       type => "es-error"
       start_position => "beginning"
       codec => multiline {
          pattern => "^\["          # lines starting with "[" begin a new event
          negate => true
          what => "previous"
        }
    }
}

output {

    if [type] == "system"{
        elasticsearch {
           hosts => ["192.168.247.135:9200"]
           index => "systemlog-%{+YYYY.MM.dd}"
        }
    }

    if [type] == "es-error"{
        elasticsearch {
           hosts => ["192.168.247.135:9200"]
           index => "es-error-%{+YYYY.MM.dd}"
        }
    }
}

# Run in the background

[root@elk-node1 ~]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/elk_log.conf &

[1] 28574

You have new mail in /var/spool/mail/root

[root@elk-node1 ~]# Settings: Default filter workers: 1

Logstash startup completed
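The multiline codec in the es-error input starts a new event whenever a line matches `^\[` and, with negate => true and what => "previous", glues every non-matching line onto the previous event; this is how a Java stack trace stays in one document. A small sketch of that grouping rule (the sample lines are shortened from the error log shown earlier):

```python
import re

def group_multiline(lines, pattern=r"^\["):
    # negate => true, what => "previous": a line that does NOT match the
    # pattern is appended to the previous event instead of starting a new one
    events = []
    for line in lines:
        if re.match(pattern, line) or not events:
            events.append(line)
        else:
            events[-1] += "\n" + line
    return events

log = [
    "[2018-07-12 22:00:47] ERROR startup failed",
    "    at java.nio.file.Files.createDirectory(Files.java:674)",
    "    at org.elasticsearch.bootstrap.Security.addPath(Security.java:314)",
    "[2018-07-12 22:03:28] INFO started",
]
print(len(group_multiline(log)))  # → 2
```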

V. Installing Kibana

Official download page: https://www.elastic.co/downloads/kibana

The latest official release, 6.3.1, is out, but it is too new and caused a lot of problems after download, so version 4.3.1 is used here for now.


1) Install Kibana:

[root@elk-node1 local]# cd /usr/local/
[root@elk-node1 local]# wget https://artifacts.elastic.co/downloads/kibana/kibana-4.3.1-linux-x64.tar.gz
[root@elk-node1 local]# tar -xf kibana-4.3.1-linux-x64.tar.gz
[root@elk-node1 local]# ln -s /usr/local/kibana-4.3.1-linux-x64 /usr/local/kibana
[root@elk-node1 local]# cd kibana

[root@elk-node1 kibana]# ls

bin? config? installedPlugins? LICENSE.txt? node? node_modules? optimize? package.json? README.txt? src? webpackShims

2) Edit the configuration file:

[root@elk-node1 kibana]# cd config/

[root@elk-node1 config]# pwd

/usr/local/kibana/config

[root@elk-node1 config]# vim kibana.yml

You have new mail in /var/spool/mail/root

[root@elk-node1 config]# grep -Ev "^#|^$" kibana.yml

server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.247.135:9200"
kibana.index: ".kibana"

3) screen is a full-screen window manager that multiplexes a physical terminal between several processes (usually interactive shells). Each virtual terminal provides DEC VT100 functionality.

[root@elk-node1 local]# yum install -y screen

4) Run screen, start Kibana inside it, and press the Ctrl+a d key combination to detach it into its own window.

[root@elk-node1 config]# screen

[root@elk-node1 config]# /usr/local/kibana/bin/kibana

  log   [02:23:34.148] [info][status][plugin:kibana@6.3.1] Status changed from uninitialized to green - Ready
  log   [02:23:34.213] [info][status][plugin:elasticsearch@6.3.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [02:23:34.216] [info][status][plugin:xpack_main@6.3.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [02:23:34.221] [info][status][plugin:searchprofiler@6.3.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [02:23:34.224] [info][status][plugin:ml@6.3.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch

[root@elk-node1 config]# screen -ls

There are screens on:

    29696.pts-0.elk-node1    (Detached)

[root@elk-node1 kibana]# /usr/local/kibana/bin/kibana

  log   [11:25:37.557] [info][status][plugin:kibana] Status changed from uninitialized to green - Ready
  log   [11:25:37.585] [info][status][plugin:elasticsearch] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [11:25:37.600] [info][status][plugin:kbn_vislib_vis_types] Status changed from uninitialized to green - Ready
  log   [11:25:37.602] [info][status][plugin:markdown_vis] Status changed from uninitialized to green - Ready
  log   [11:25:37.604] [info][status][plugin:metric_vis] Status changed from uninitialized to green - Ready
  log   [11:25:37.606] [info][status][plugin:spyModes] Status changed from uninitialized to green - Ready
  log   [11:25:37.608] [info][status][plugin:statusPage] Status changed from uninitialized to green - Ready
  log   [11:25:37.612] [info][status][plugin:table_vis] Status changed from uninitialized to green - Ready
  log   [11:25:37.647] [info][listening] Server running at http://0.0.0.0:5601

VI. Basic Kibana Usage

Visit Kibana at: http://192.168.247.135:5601

On first login, create an es-error index for ELK

Add the message and path fields

Using the search bar, search for the keyword "soft"

Add the systemlog index created earlier

* performs wildcard matching

And that's it: the ELK log platform is basically set up. It was quite a slog, but from here you just write configurations for whichever applications you need to monitor and add them to Kibana.

ELK in Practice, Part 3: Building a Log Platform with RabbitMQ + ELK

This post documents a log platform build. The scenario:

1. Applications send their logs to RabbitMQ

2. Logstash connects to RabbitMQ and pulls the logs

3. Logstash processes the extracted log content and stores it in Elasticsearch

4. Kibana connects to Elasticsearch and provides log query and display functions.

The overall flow is: application → RabbitMQ → Logstash → Elasticsearch → Kibana.

1. Pull the Docker images that will be used

[root@14-28 pipeline]# docker images
REPOSITORY                                      TAG             IMAGE ID       CREATED        SIZE
rabbitmq                                        3.6-management  f2e38e79371c   2 months ago   149MB
docker.elastic.co/logstash/logstash             6.2.4           00a38ba5444c   3 months ago   657MB
docker.elastic.co/kibana/kibana                 6.2.4           327c6538ba4c   3 months ago   933MB
docker.elastic.co/elasticsearch/elasticsearch   6.2.4           7cb69da7148d   3 months ago   515MB

Note that rabbitmq is pulled from the official Docker image repository: https://store.docker.com/images/rabbitmq

The three Elastic images are pulled from the Elastic registry: https://www.docker.elastic.co/

2. Prepare the container orchestration file

docker-compose is used to orchestrate the containers. The file is as follows:

root@Ubuntusvr1:~/elk# cat docker-compose.yml
version: '2'
services:
  rabbitmq:
    image: rabbitmq:3.6-management
    ports:
      - "5672:5672"
      - "15672:15672"
    container_name: rabbitmq-ichub
    hostname: rabbitmq-ichub
    environment:
      - "RABBITMQ_DEFAULT_USER=dev"
      - "RABBITMQ_DEFAULT_PASS=123456"

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    ports:
      - "9200:9200"
      - "9300:9300"
    container_name: elasticsearch
    environment:
      - "xpack.security.enabled=false"
      - "discovery.type=single-node"
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"

  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.4
    container_name: kibana
    depends_on:
      - elasticsearch
    ports:
      - "5601:5601"
    links:
      - elasticsearch
    environment:
      - "xpack.security.enabled=false"

  logstash:
    image: docker.elastic.co/logstash/logstash:6.2.4
    container_name: logstash
    depends_on:
      - elasticsearch
      - rabbitmq
    ports:
      - "25826:25826"
    links:
      - elasticsearch
      - rabbitmq
    volumes:
      - $PWD/logstashpipeline:/usr/share/logstash/pipeline

Note that the rabbitmq container must be created with the hostname parameter set, because this image stores its data under the NodeName (which defaults to the hostname).

For the logstash container, a volume pins the configuration path so that the container uses our pipeline configuration file.

depends_on is also specified because the containers depend on one another: for example, the RabbitMQ container must be up before the Logstash container can connect to the queue, so the Logstash container depends on the RabbitMQ container.

3. Configure logstash to pull data from rabbitmq

root@Ubuntusvr1:~/elk# cat logstashpipeline/rabbitmq.conf
input {
  rabbitmq {
    host => "rabbitmq"
    exchange => "ichub_log_exchange"
    exchange_type => "topic"
    key => "#"
    queue => "ichub_log"
    heartbeat => 30
    durable => true
    password => "123456"
    user => "dev"
    codec => "plain"
  }
}

filter {
  grok {
    match => {"message" => "%{TIMESTAMP_ISO8601:logtime} %{NUMBER:pid} %{WORD:level} (?<dbname>\S*) %{USERNAME:modul}: %{GREEDYDATA:msgbody}"}
  }
  date {
    match => [ "logtime", "YYYY-MM-dd HH:mm:ss,SSS" ]
    target => "@timestamp"
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "ichub_prod-%{+YYYY.MM.dd}"
  }
}

The real work happens here, in three parts: input configures the data source, filter processes the data, and output hands the data to Elasticsearch.

input

This uses Logstash's rabbitmq input plugin; its options are covered in detail in the plugin's documentation.

Note that this plugin's default codec is JSON. Our logs are plain text lines with space-separated fields, so codec is set to plain here.

The rabbitmq exchange, exchange_type, and key are also specified explicitly, so when logstash starts it connects to RabbitMQ and automatically creates the exchange and the queue.

filter

grok parses the log here. Because our time format is not supported by default, the date plugin parses the time separately and writes it into the @timestamp field, which Elasticsearch uses as its time index field.

Because the database-name field in our logs may be a literal question mark "?", the default WORD pattern cannot be used; a custom regular expression matches the database name instead.

The parsing pattern deserves a walkthrough: %{TIMESTAMP_ISO8601:logtime} %{NUMBER:pid} %{WORD:level} (?<dbname>\S*) %{USERNAME:modul}: %{GREEDYDATA:msgbody}

TIMESTAMP_ISO8601: matches the timestamp

NUMBER: matches a number

WORD: matches a single word

(?<dbname>\S*): matches a run of non-whitespace characters; in this scenario it matches values such as ic_new and "?".

GREEDYDATA: matches all remaining content

USERNAME: matches a string of letters, digits, periods, underscores, and hyphens.
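The grok pattern can be approximated with a plain regular expression, which is handy for checking a line format offline. The log line below is a hypothetical example in the expected format (the module name openerp.http is invented for illustration), and the strptime call plays the role of the date plugin:

```python
import re
from datetime import datetime

# Rough regex equivalents of the grok patterns used in the filter above
pattern = re.compile(
    r"(?P<logtime>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) "   # TIMESTAMP_ISO8601
    r"(?P<pid>\d+) "                                             # NUMBER
    r"(?P<level>\w+) "                                           # WORD
    r"(?P<dbname>\S*) "                                          # (?<dbname>\S*)
    r"(?P<modul>[\w.\-]+): "                                     # USERNAME
    r"(?P<msgbody>.*)"                                           # GREEDYDATA
)

# Hypothetical log line in the format the filter expects
line = "2018-07-13 02:17:40,800 1234 INFO ic_new openerp.http: request done"
m = pattern.match(line)
# Like the date plugin: parse "YYYY-MM-dd HH:mm:ss,SSS" into a timestamp
ts = datetime.strptime(m.group("logtime"), "%Y-%m-%d %H:%M:%S,%f")
print(m.group("dbname"), m.group("level"), ts.year)  # → ic_new INFO 2018
```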

More grok patterns are available in the official pattern library.

When writing grok parsers, you can use Kibana's built-in grok debugger,

or one of the online grok debuggers.

output

The output is the Elasticsearch server, with the index named as a fixed prefix plus the date, creating one index per day.

If you run into problems, printing the data to the console makes them easier to pin down:

output
{
    stdout {
        codec => dots
    }
    elasticsearch {
        hosts => "elasticsearch:9200"
        index => "ichub-%{+YYYY.MM.dd}"
    }
}

This configures two outputs, one to the console and one to Elasticsearch. With the dots codec, the console prints a dot for each event processed.

4. Create and start the containers

# docker-compose up -d

When debugging, you can attach to the logstash container to watch its logs in real time:

# docker logs -f logstash

5. Configure the index in Kibana and query the logs

Open the index creation page; if logstash has started shipping data, the index named in the configuration file will appear.

Once the index is configured, the log data can be queried on the Discover page.

Related reference:

[Ops] Log monitoring and alerting with ELK

https://blog.csdn.net/yeweiouyang/article/details/54948846

Takeaway: ElastAlert can be configured to execute a Python script when needed, to send alert SMS messages and emails.

Original link: https://hbdhgg.com/2/173367.html