Spring Boot syslog integration: collecting logs into ES with Logstash
1. Background
Logstash is a real-time data collection engine. It can ingest all kinds of data and analyze, filter and consolidate it, keeping only the data that matches your conditions and feeding it into a visualization layer. It supports full or incremental transfer from a variety of data sources, normalization of data formats, formatted output and more, and it is commonly used for log processing. Its workflow is divided into three stages:
- input: the data-ingestion stage; it can receive data from many sources such as Oracle, MySQL, PostgreSQL, files, etc.
- filter: the normalization stage; data can be filtered and formatted here, e.g. parsing timestamps and strings.
- output: the output stage; data can be shipped to receivers such as Elasticsearch, MongoDB, Kafka, etc.
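As a quick illustration of the three stages, the following one-liner is a minimal sketch (it assumes a local Logstash installation is on the PATH): it reads lines from stdin, tags them in a filter, and prints structured events to stdout.

# Minimal pipeline: stdin input, a mutate filter that adds a tag, rubydebug output
logstash -e 'input { stdin { } } filter { mutate { add_tag => ["demo"] } } output { stdout { codec => rubydebug } }'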
Architecture: the Spring Boot application emits syslog messages; the operating system's rsyslog service forwards the data; Logstash listens on the rsyslog port, filters the data and sends it to ES for storage.
2. Integrating syslog in Spring Boot
Maven dependencies:
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.7</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-core</artifactId>
    <version>1.1.7</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.1.7</version>
</dependency>

logback.xml configuration:
After the appenders are configured, they must be referenced inside the root tag, otherwise they do not take effect.
<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="false">
    <!-- Console output -->
    <appender name="consoleLogAppender" class="ch.qos.logback.core.ConsoleAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>INFO</level>
        </filter>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{20} - %cyan(%.-3072msg %n)</pattern>
        </encoder>
    </appender>

    <appender name="infoFileAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <File>./logs/service.log</File>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>INFO</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{20} - %cyan(%.-3072msg %n)</pattern>
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>./logs/service-log-%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>15</maxHistory>
            <totalSizeCap>5GB</totalSizeCap>
        </rollingPolicy>
    </appender>

    <appender name="errorFileAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <File>./logs/service-error.log</File>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>ERROR</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{20} - %cyan(%.-3072msg %n)</pattern>
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>./logs/service-error.log.%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>15</maxHistory>
            <totalSizeCap>5GB</totalSizeCap>
        </rollingPolicy>
    </appender>

    <appender name="msgAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <File>./logs/service-msg.log</File>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>INFO</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{20} - %cyan(%.-3072msg %n)</pattern>
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>./logs/service-msg-%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>5</maxHistory>
            <totalSizeCap>5GB</totalSizeCap>
        </rollingPolicy>
    </appender>

    <appender name="taskAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <File>./logs/service-task.log</File>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>INFO</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{20} - %cyan(%.-3072msg %n)</pattern>
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>./logs/service-task-%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>5</maxHistory>
            <totalSizeCap>5GB</totalSizeCap>
        </rollingPolicy>
    </appender>

    <appender name="mybatisplus" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <File>./logs/service-mybatisplus.log</File>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>DEBUG</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{20} - %cyan(%.-3072msg %n)</pattern>
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>./logs/service-mybatisplus-%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>5</maxHistory>
            <totalSizeCap>5GB</totalSizeCap>
        </rollingPolicy>
    </appender>

    <!-- Define a SyslogAppender -->
    <appender name="SYSLOG" class="ch.qos.logback.classic.net.SyslogAppender">
        <syslogHost>localhost</syslogHost>
        <port>12525</port>
        <!-- Syslog facility: every log the service sends to the syslog server is tagged as coming from LOCAL0 -->
        <facility>LOCAL0</facility>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>WARN</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
        <suffixPattern>[%d{yyyy-MM-dd HH:mm:ss.SSS}] - [%p] - [%X{app:-${app}}] - [%thread] - [%logger{36}.%M] - %msg%n</suffixPattern>
    </appender>

    <logger name="msgLogger" level="info" additivity="false">
        <appender-ref ref="msgAppender" />
    </logger>
    <logger name="taskLogger" level="info" additivity="false">
        <appender-ref ref="taskAppender" />
    </logger>
    <!--
    <logger name="com.zbnsec.opera.project.simulator.framework.task" level="DEBUG">
        <appender-ref ref="mybatisplus" />
    </logger>
    -->
    <root level="INFO" additivity="false">
        <appender-ref ref="consoleLogAppender"/>
        <appender-ref ref="infoFileAppender"/>
        <appender-ref ref="errorFileAppender"/>
        <appender-ref ref="SYSLOG"/>
    </root>
</configuration>

SyslogAppender is the syslog-specific configuration: syslogHost is the hostname/IP address of the syslog server; port is the syslog server's listening port (the syslog default is 514, UDP); facility identifies the source of the messages; suffixPattern describes the log message format.
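Before rsyslog or Logstash is in place, you can confirm that the application really emits UDP syslog datagrams on port 12525 (only WARN/ERROR events pass the LevelFilter above). A minimal sketch, assuming the OpenBSD variant of netcat is installed and nothing else is bound to the port:

# Listen on UDP 12525 and print whatever the SyslogAppender sends
nc -lu 12525
# then trigger a WARN or ERROR log in the application and watch the raw messages arrive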
3. Receiving the Spring Boot application's logs with rsyslog
1) Install the rsyslog service on the server

apt install rsyslog          # install
systemctl start rsyslog      # start the service
systemctl status rsyslog     # check the service status
systemctl enable rsyslog     # start rsyslog automatically at boot

2) Configure rsyslog.conf
The rsyslog configuration file is located at /etc/rsyslog.conf.
global(workDirectory="/var/lib/rsyslog")
module(load="builtin:omfile" Template="RSYSLOG_TraditionalFileFormat")
include(file="/etc/rsyslog.d/*.conf" mode="optional")

*.*                                        @localhost:12515
*.info;mail.none;authpriv.none;cron.none   /var/log/messages
authpriv.*                                 /var/log/secure
mail.*                                     -/var/log/maillog
cron.*                                     /var/log/cron
*.emerg                                    :omusrmsg:*
uucp,news.crit                             /var/log/spooler
local7.*                                   /var/log/boot.log

The configuration above forwards syslog messages to port 12525 (the @ prefix denotes UDP). If the system logs are needed as well, the configuration below is also required; tail -500f /var/log/messages will then show the system log being written continuously.
module(load="imuxsock" SysSock.Use="off")
module(load="imjournal" StateFile="imjournal.state")
module(load="imklog")
module(load="immark")
$imjournalRatelimitInterval 0

If the Spring Boot logs should also be stored in the messages file, the configuration below is needed as well. Note: if rsyslog listens on port 12525 here, Logstash will hit a port conflict when it also tries to bind 12525 at startup, and Logstash will then receive no Spring Boot log data (a quick way to check the port is shown after the listener configuration below).
# Listen on a UDP port
module(load="imudp")
input(type="imudp" port="12525")

# Listen on a TCP port
module(load="imtcp")
input(type="imtcp" port="12525")

After modifying the configuration, run systemctl restart rsyslog to restart the service.
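To avoid the port clash mentioned above, it helps to check which process currently owns UDP 12525 before starting Logstash, and to confirm that rsyslog came back up after the restart. A small sketch using iproute2's ss:

ss -lunp | grep 12525                  # which process is listening on UDP 12525
systemctl status rsyslog --no-pager    # confirm rsyslog restarted cleanly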
4. Integrating Logstash
1) Pull the Logstash image
The Logstash version should match the ES version, otherwise problems may occur.
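You can read the version of the running cluster first and pick the matching Logstash tag; a sketch, assuming ES is reachable at http://esHost:9200 (replace esHost with your actual host):

# The "number" field in the response is the ES version the Logstash tag should match
curl -s http://esHost:9200 | grep '"number"'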
docker pull docker.elastic.co/logstash/logstash:7.4.0

2) Configure Logstash
Apart from the settings below, everything else uses the default configuration shipped in the Logstash container. You can start a throwaway container and copy those defaults (the config directory and the pipeline directory) out of it, as sketched below.
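One way to obtain those defaults: start a throwaway container, copy the two directories out, then remove it again. A sketch, using the 7.4.0 image pulled above and the current directory as the destination:

docker run -d --name logstash-tmp docker.elastic.co/logstash/logstash:7.4.0
docker cp logstash-tmp:/usr/share/logstash/config   ./config     # default config directory
docker cp logstash-tmp:/usr/share/logstash/pipeline ./pipeline   # default pipeline directory
docker rm -f logstash-tmp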
logstash.yml:

config.reload.automatic: true
config.reload.interval: 3s
http.host: 0.0.0.0
path.logs: /usr/share/logstash/logs

log4j2.properties:
status = error
name = LogstashPropertiesConfig

appender.console.type = Console
appender.console.name = plain_console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n

appender.json_console.type = Console
appender.json_console.name = json_console
appender.json_console.layout.type = JSONLayout
appender.json_console.layout.compact = true
appender.json_console.layout.eventEol = true

# Define Rolling File Appender
appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:ls.logs}/logstash-plain.log
appender.rolling.filePattern = ${sys:ls.logs}/logstash-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n
appender.rolling.policies.type = Policies
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 100MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 20

rootLogger.level = ${sys:ls.log.level}
rootLogger.appenderRef.console.ref = ${sys:ls.log.format}_console
rootLogger.appenderRef.rolling.ref = rolling

pipelines.yml: for each pipeline configured in the pipeline directory, add a corresponding entry here.
- pipeline.id: system-syslog
  path.config: /usr/share/logstash/pipeline/fscr-syslog.conf

fscr-syslog.conf:
input {
  syslog {
    port => 12525
    type => "system-syslog"
  }
}

filter {
  if [type] == "system-syslog" {
    mutate {
      # Remove ANSI escape sequences
      gsub => ["message", "\e\[\d+(;\d+)*m", ""]
    }
    if [message] =~ /^\[/ {
      dissect {
        mapping => {
          "message" => "[%{timestamp}] - [%{loglevel}] - [%{app}] - [%{thread_info}] - [%{source_class}] - %{log_message}"
        }
      }
    }
    mutate {
      # Convert WARN to WARNING
      gsub => ["loglevel", "^WARN$", "WARNING"]
      add_field => [ "received_at", "%{timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
      add_field => [ "syslog_hostname", "%{logsource}" ]
      add_field => [ "syslog_severity", "%{loglevel}" ]
      add_field => [ "syslog_program", "%{app}" ]
      add_field => [ "syslog_message", "%{message}" ]
      add_field => [ "syslog_timestamp", "%{timestamp}" ]
      remove_field => ["severity_label", "facility_label", "facility", "priority"]
    }
    date {
      match => ["adjusted_received_at", "ISO8601"]
      timezone => "Asia/Shanghai"
      target => "timestamp"
    }
  }
}

output {
  if [loglevel] == "WARNING" or [loglevel] == "ERROR" {
    elasticsearch {
      hosts => ["http://esHost:9200"]
      index => "logstash-%{+YYYY.MM.dd}"
      template_name => "logstash"       # specify the template (it must already exist in ES)
      template_overwrite => false
    }
  }
  if [loglevel] == "WARNING" or [loglevel] == "ERROR" {
    stdout {
      codec => rubydebug
    }
  }
}
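Before starting the real container, the pipeline syntax can be validated with Logstash's --config.test_and_exit flag, which only parses the configuration and exits without processing events. A sketch, assuming the pipeline directory prepared above and the 7.4.0 image:

docker run --rm \
  -v /opt/fscr/middleware/logstash/logstash/pipeline:/usr/share/logstash/pipeline \
  docker.elastic.co/logstash/logstash:7.4.0 \
  logstash -f /usr/share/logstash/pipeline/fscr-syslog.conf --config.test_and_exit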
logstash.json (the index template file):

{
  "name": "logstash",
  "order": 0,
  "version": 60001,
  "index_patterns": ["logstash-*"],
  "settings": {
    "index": {
      "number_of_shards": 1,
      "refresh_interval": "5s"
    }
  },
  "mappings": {
    "dynamic_templates": [
      {
        "message_field": {
          "path_match": "message",
          "mapping": {
            "norms": false,
            "type": "text"
          },
          "match_mapping_type": "string"
        }
      },
      {
        "string_fields": {
          "mapping": {
            "norms": false,
            "type": "text",
            "fields": {
              "keyword": {
                "ignore_above": 256,
                "type": "keyword"
              }
            }
          },
          "match_mapping_type": "string",
          "match": "*"
        }
      }
    ],
    "properties": {
      "@timestamp": {
        "type": "date"
      },
      "geoip": {
        "dynamic": true,
        "properties": {
          "ip": {
            "type": "ip"
          },
          "latitude": {
            "type": "half_float"
          },
          "location": {
            "type": "geo_point"
          },
          "longitude": {
            "type": "half_float"
          }
        }
      },
      "@version": {
        "type": "keyword"
      }
    }
  },
  "aliases": {}
}
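The output section above references a template named logstash that must already exist in ES. One way to install it is the legacy template API; a sketch, assuming ES 7.x at http://esHost:9200 and the body above saved as logstash.json (the template name is taken from the URL, so remove the outer "name" field from the body if ES rejects it):

curl -X PUT "http://esHost:9200/_template/logstash" \
  -H 'Content-Type: application/json' \
  -d @logstash.json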
Start the container:

docker run --name logstash -itd --net=host \
  -v /opt/fscr/middleware/logstash/logstash/config:/usr/share/logstash/config \
  -v /opt/fscr/middleware/logstash/logstash/pipeline:/usr/share/logstash/pipeline \
  -p 5044:5044 -p 9600:9600 \
  logstash:8.8.0

Once the container is up and its logs contain no errors, the printed log output shows a normal startup.
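To verify the whole chain, watch the container logs and check that daily indices appear once the application has emitted WARN/ERROR events; a sketch assuming the same ES host used above:

docker logs -f logstash                                   # should show a normal startup, no errors
curl -s "http://esHost:9200/_cat/indices/logstash-*?v"    # daily logstash-YYYY.MM.dd indices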