ELK Log Collection Cluster Lab

Lab environment

Role   Hostname  IP             Interface
httpd  httpd     192.168.31.50  ens33
node1  node1     192.168.31.51  ens33
node2  node2     192.168.31.53  ens33

Environment setup

Set each host's IP address to the static address from the topology, then change the hostnames.

#httpd
[root@localhost ~]# hostnamectl set-hostname httpd
[root@localhost ~]# bash
[root@httpd ~]#

#node1
[root@localhost ~]# hostnamectl set-hostname node1
[root@localhost ~]# bash
[root@node1 ~]# vim /etc/hosts
192.168.31.51 node1
192.168.31.53 node2

#node2
[root@localhost ~]# hostnamectl set-hostname node2
[root@localhost ~]# bash
[root@node2 ~]# vim /etc/hosts
192.168.31.51 node1
192.168.31.53 node2

Install elasticsearch

#node1
[root@node1 ~]# ls
elk软件包  公共  模板  视频  图片  文档  下载  音乐  桌面
[root@node1 ~]# mv elk软件包 elk
[root@node1 ~]# ls
elk  公共  模板  视频  图片  文档  下载  音乐  桌面
[root@node1 ~]# cd elk
[root@node1 elk]# ls
elasticsearch-5.5.0.rpm    kibana-5.5.1-x86_64.rpm  node-v8.2.1.tar.gz
elasticsearch-head.tar.gz  logstash-5.5.1.rpm       phantomjs-2.1.1-linux-x86_64.tar.bz2
[root@node1 elk]# rpm -ivh elasticsearch-5.5.0.rpm
warning: elasticsearch-5.5.0.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Creating elasticsearch group... OK
Creating elasticsearch user...
OK
Updating / installing...
   1:elasticsearch-0:5.5.0-1          ################################# [100%]
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
 sudo systemctl daemon-reload
 sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
 sudo systemctl start elasticsearch.service

Configure elasticsearch on both nodes

#node1
vim /etc/elasticsearch/elasticsearch.yml
17 cluster.name: my-elk-cluster                          //cluster name
23 node.name: node1                                      //node name
33 path.data: /var/lib/elasticsearch                     //data directory
37 path.logs: /var/log/elasticsearch/                    //log directory
43 bootstrap.memory_lock: false                          //do not lock memory at startup
55 network.host: 0.0.0.0                                 //bind address; 0.0.0.0 means all interfaces
59 http.port: 9200                                       //listening port
68 discovery.zen.ping.unicast.hosts: ["node1", "node2"]  //cluster discovery via unicast

#node2 (identical except node.name, which must be unique per node)
17 cluster.name: my-elk-cluster
23 node.name: node2
33 path.data: /var/lib/elasticsearch
37 path.logs: /var/log/elasticsearch/
43 bootstrap.memory_lock: false
55 network.host: 0.0.0.0
59 http.port: 9200
68 discovery.zen.ping.unicast.hosts: ["node1", "node2"]

Install the elasticsearch-head plugin on node1

Move into the elk directory.

#install the plugin; compiling node is slow
[root@node1 ~]# cd elk/
[root@node1 elk]# ls
elasticsearch-5.5.0.rpm    kibana-5.5.1-x86_64.rpm  phantomjs-2.1.1-linux-x86_64.tar.bz2
elasticsearch-head.tar.gz  logstash-5.5.1.rpm       node-v8.2.1.tar.gz
[root@node1 elk]# tar xf node-v8.2.1.tar.gz
[root@node1 elk]# cd node-v8.2.1/
[root@node1 node-v8.2.1]# ./configure
[root@node1 node-v8.2.1]# make && make install
[root@node1 node-v8.2.1]# cd ~/elk
[root@node1 elk]# tar xf phantomjs-2.1.1-linux-x86_64.tar.bz2
[root@node1 elk]# cd phantomjs-2.1.1-linux-x86_64/bin/
[root@node1 bin]# ls
phantomjs
[root@node1 bin]# cp phantomjs /usr/local/bin/
[root@node1 bin]# cd ~/elk/
[root@node1 elk]# tar xf elasticsearch-head.tar.gz
[root@node1 elk]# cd elasticsearch-head/
[root@node1 elasticsearch-head]# npm install
npm WARN deprecated fsevents@1.2.13: The v1 package contains DANGEROUS / INSECURE binaries.
Upgrade to safe fsevents v2
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@^1.0.0 (node_modules/karma/node_modules/chokidar/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.2.13: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
npm WARN elasticsearch-head@0.0.0 license should be a valid SPDX license expression
up to date in 3.536s

Edit the elasticsearch configuration file

[root@node1 ~]# vim /etc/elasticsearch/elasticsearch.yml
84 # ---------------------------------- Various -----------------------------------
85 #
86 # Require explicit names when deleting indices:
87 #
88 #action.destructive_requires_name: true
89 http.cors.enabled: true        //enable cross-origin access support; default is false
90 http.cors.allow-origin: "*"    //domains allowed cross-origin access
[root@node1 ~]# systemctl restart elasticsearch.service

#start elasticsearch-head
cd /root/elk/elasticsearch-head
npm run start

#check the listener
netstat -anput | grep :9100

#browse to
http://192.168.31.51:9100

Install logstash on node1

[root@node1 elk]# rpm -ivh logstash-5.5.1.rpm
warning: logstash-5.5.1.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
package logstash-1:5.5.1-1.noarch is already installed

#start logstash and create a symlink
[root@node1 elk]# systemctl start logstash.service
[root@node1 elk]# ln -s /usr/share/logstash/bin/logstash /usr/local/bin/

#test 1: read from stdin, write to stdout
[root@node1 elk]# logstash -e 'input { stdin{} } output { stdout{} }'
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties.
Using default config which logs to console
16:03:50.250 [main] INFO  logstash.setting.writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
16:03:50.256 [main] INFO  logstash.setting.writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
16:03:50.330 [LogStash::Runner] INFO  logstash.agent - No persistent UUID file found. Generating new UUID {:uuid=>"9ba08544-a7a7-4706-a3cd-2e2ca163548d", :path=>"/usr/share/logstash/data/uuid"}
16:03:50.584 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
16:03:50.739 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started
The stdin plugin is now waiting for input:
16:03:50.893 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
^C16:04:32.838 [SIGINT handler] WARN  logstash.runner - SIGINT received. Shutting down the agent.
16:04:32.855 [LogStash::Runner] WARN  logstash.agent - stopping pipeline {:id=>"main"}

#test 2: write to stdout with the rubydebug codec
[root@node1 elk]# logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug } }'
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties.
Using default config which logs to console
16:46:23.975 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
The stdin plugin is now waiting for input:
16:46:24.014 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started
16:46:24.081 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
^C16:46:29.970 [SIGINT handler] WARN  logstash.runner - SIGINT received. Shutting down the agent.
16:46:29.975 [LogStash::Runner] WARN  logstash.agent - stopping pipeline {:id=>"main"}

#test 3: write stdin input into elasticsearch
[root@node1 elk]# logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["192.168.31.51:9200"] } }'
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties.
Using default config which logs to console
16:46:55.951 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>["http://192.168.31.51:9200/"]}}
16:46:55.955 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>"http://192.168.31.51:9200/", :path=>"/"}
16:46:56.049 [[main]-pipeline-manager] WARN  logstash.outputs.elasticsearch - Restored connection to ES instance {:url=>#<Java::JavaNet::URI:0x3a106333>}
16:46:56.068 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Using mapping template from {:path=>nil}
16:46:56.204 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
16:46:56.233 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Installing elasticsearch template to _template/logstash
16:46:56.429 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<Java::JavaNet::URI:0x19aeba5c>]}
16:46:56.432 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
16:46:56.461 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started
The stdin plugin is now
waiting for input:
16:46:56.561 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
^C16:46:57.638 [SIGINT handler] WARN  logstash.runner - SIGINT received. Shutting down the agent.
16:46:57.658 [LogStash::Runner] WARN  logstash.agent - stopping pipeline {:id=>"main"}

Logstash configuration files

Logstash log-collection configuration files are stored in /etc/logstash/conf.d by default. A configuration file has up to three sections: input, output, and filter (as needed). The standard layout is:

input {...}    #input
filter {...}   #filter
output {...}   #output

Each section can also specify several sources. For example, to read from two log files:

input {
    file { path => "/var/log/messages" type => "syslog" }
    file { path => "/var/log/apache/access.log" type => "apache" }
}

Case: collect system logs with logstash

[root@node1 conf.d]# chmod o+r /var/log/messages
[root@node1 conf.d]# vim /etc/logstash/conf.d/system.conf
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.31.51:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}
[root@node1 conf.d]# systemctl restart logstash.service

Install kibana on node1

cd ~/elk
[root@node1 elk]# rpm -ivh kibana-5.5.1-x86_64.rpm
warning: kibana-5.5.1-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...
                                      ################################# [100%]
Updating / installing...
   1:kibana-5.5.1-1                   ################################# [100%]
[root@node1 elk]# vim /etc/kibana/kibana.yml
2  server.port: 5601                                //port Kibana listens on
7  server.host: "0.0.0.0"                           //address Kibana listens on
21 elasticsearch.url: "http://192.168.31.51:9200"   //connection to Elasticsearch
30 kibana.index: ".kibana"                          //add the .kibana index in Elasticsearch
[root@node1 elk]# systemctl start kibana.service

Access kibana

On first access an index pattern must be added; add the system-* index created earlier.

Enterprise case: collect httpd access-log information

Install logstash on the httpd server following the installation steps above. On the httpd server logstash acts as an agent and does not need to run as a service.

Write the httpd log-collection configuration file:

[root@httpd ~]# yum install -y httpd
[root@httpd ~]# systemctl start httpd
[root@httpd ~]# systemctl start logstash
[root@httpd ~]# vim /etc/logstash/conf.d/httpd.conf
input {
    file {
        path => "/var/log/httpd/access_log"
        type => "access"
        start_position => "beginning"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.31.51:9200"]
        index => "httpd-%{+YYYY.MM.dd}"
    }
}
[root@httpd ~]# logstash -f /etc/logstash/conf.d/httpd.conf
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties.
Using default config which logs to console
21:29:34.272 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>["http://192.168.31.51:9200/"]}}
21:29:34.275 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>"http://192.168.31.51:9200/", :path=>"/"}
21:29:34.400 [[main]-pipeline-manager] WARN  logstash.outputs.elasticsearch - Restored connection to ES instance {:url=>#<Java::JavaNet::URI:0x1c254b0a>}
21:29:34.423 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Using mapping template from {:path=>nil}
21:29:34.579 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
21:29:34.585 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<Java::JavaNet::URI:0x3b483278>]}
21:29:34.588 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
21:29:34.845 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started
21:29:34.921 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
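A few quick checks help confirm each stage of the lab. First, the two-node cluster configured above can be verified over the elasticsearch HTTP API. This is a sketch, assuming the node1 address from the environment table and that both elasticsearch services are running; the grep/cut parsing is a plain-shell stand-in for a JSON tool such as jq:

```shell
# Query cluster health from node1 and pull out the two fields that
# matter for this lab: the overall status and the node count.
health=$(curl -s http://192.168.31.51:9200/_cluster/health)
status=$(printf '%s' "$health" | grep -o '"status":"[a-z]*"' | cut -d'"' -f4)
nodes=$(printf '%s' "$health" | grep -o '"number_of_nodes":[0-9]*' | cut -d: -f2)
# Expect status green (or yellow while replicas allocate) and nodes=2
# once node1 and node2 have found each other via unicast discovery.
echo "status=$status nodes=$nodes"
```

The same information is visible graphically in elasticsearch-head on port 9100.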
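The http.cors.* settings added for elasticsearch-head can also be checked from the command line. A sketch, using the lab IPs; it asks elasticsearch for its response headers the way the browser-based head UI on port 9100 would:

```shell
# Request the root endpoint with an Origin header and keep only the
# CORS response header; the variable stays empty if CORS is disabled.
cors=$(curl -s -D - -o /dev/null \
         -H 'Origin: http://192.168.31.51:9100' \
         http://192.168.31.51:9200/ \
       | grep -i 'access-control-allow-origin')
# A line such as "access-control-allow-origin: *" confirms the
# settings took effect after the elasticsearch restart.
echo "$cors"
```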
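In the system.conf output section above, %{+YYYY.MM.dd} is expanded by Logstash from each event's @timestamp, so a new index is created per day. A sketch of what today's index name looks like and how to ask elasticsearch for its document count, assuming the node1 IP from this lab:

```shell
# Build today's index name the same way Logstash's date pattern
# does, then query the count of events shipped into it so far.
index="system-$(date +%Y.%m.%d)"
count=$(curl -s "http://192.168.31.51:9200/${index}/_count")
echo "index=$index count=$count"
```

This daily naming is also why the index pattern registered in Kibana is the wildcard system-*, not a single index name.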
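Finally, the httpd case can be exercised end to end: generate a few requests against the web server, then look for the httpd-<date> index on node1. A sketch using the lab IPs, run while the logstash agent above is tailing access_log:

```shell
# Hit the httpd server a few times so access_log gains entries
# for the logstash agent to ship...
for i in 1 2 3 4 5; do
    curl -s -o /dev/null http://192.168.31.50/
done
# ...then list the indices on node1 and keep only the httpd ones.
indices=$(curl -s 'http://192.168.31.51:9200/_cat/indices' | grep 'httpd-')
echo "$indices"
```

Once the index appears, add httpd-* as an index pattern in Kibana to browse the events.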