# ELK Log Collection System Cluster Lab

## Lab environment

| Role  | Hostname | IP            | Interface |
|-------|----------|---------------|-----------|
| httpd | httpd    | 192.168.31.50 | ens33     |
| node1 | node1    | 192.168.31.51 | ens33     |
| node2 | node2    | 192.168.31.53 | ens33     |

## Environment configuration

Set each host's IP address to the static IP from the topology and change the hostnames:

```shell
# httpd
[root@localhost ~]# hostnamectl set-hostname httpd
[root@localhost ~]# bash
[root@httpd ~]#

# node1
[root@localhost ~]# hostnamectl set-hostname node1
[root@localhost ~]# bash
[root@node1 ~]# vim /etc/hosts
192.168.31.51 node1
192.168.31.53 node2

# node2
[root@localhost ~]# hostnamectl set-hostname node2
[root@localhost ~]# bash
[root@node2 ~]# vim /etc/hosts
192.168.31.51 node1
192.168.31.53 node2
```

## Install Elasticsearch

```shell
# node1
[root@node1 ~]# ls
elk软件包  公共  模板  视频  图片  文档  下载  音乐  桌面
[root@node1 ~]# mv elk软件包 elk
[root@node1 ~]# ls
elk  公共  模板  视频  图片  文档  下载  音乐  桌面
[root@node1 ~]# cd elk
[root@node1 elk]# ls
elasticsearch-5.5.0.rpm    kibana-5.5.1-x86_64.rpm  node-v8.2.1.tar.gz
elasticsearch-head.tar.gz  logstash-5.5.1.rpm       phantomjs-2.1.1-linux-x86_64.tar.bz2
[root@node1 elk]# rpm -ivh elasticsearch-5.5.0.rpm
warning: elasticsearch-5.5.0.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Creating elasticsearch group... OK
Creating elasticsearch user...
```
```shell
OK
Upgrading/installing...
   1:elasticsearch-0:5.5.0-1          ################################# [100%]
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
 sudo systemctl daemon-reload
 sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
 sudo systemctl start elasticsearch.service
```

Install the same RPM on node2 as well, then configure both nodes.

node1 (`vim /etc/elasticsearch/elasticsearch.yml`; the leading numbers are line numbers in the file):

```
17 cluster.name: my-elk-cluster                          # cluster name
23 node.name: node1                                      # node name
33 path.data: /var/lib/elasticsearch                     # data path
37 path.logs: /var/log/elasticsearch/                    # log path
43 bootstrap.memory_lock: false                          # do not lock memory at startup
55 network.host: 0.0.0.0                                 # bind address; 0.0.0.0 means all interfaces
59 http.port: 9200                                       # listening port
68 discovery.zen.ping.unicast.hosts: ["node1", "node2"]  # cluster discovery via unicast
```

node2 (same settings, but with its own node name):

```
17 cluster.name: my-elk-cluster
23 node.name: node2
33 path.data: /var/lib/elasticsearch
37 path.logs: /var/log/elasticsearch/
43 bootstrap.memory_lock: false
55 network.host: 0.0.0.0
59 http.port: 9200
68 discovery.zen.ping.unicast.hosts: ["node1", "node2"]
```

## Install the elasticsearch-head plugin on node1

Move into the elk folder. Installing the plugin involves compiling node, which is slow.

```shell
[root@node1 ~]# cd elk/
[root@node1 elk]# ls
elasticsearch-5.5.0.rpm    kibana-5.5.1-x86_64.rpm  phantomjs-2.1.1-linux-x86_64.tar.bz2
elasticsearch-head.tar.gz  logstash-5.5.1.rpm       node-v8.2.1.tar.gz
[root@node1 elk]# tar xf node-v8.2.1.tar.gz
[root@node1 elk]# cd node-v8.2.1/
[root@node1 node-v8.2.1]# ./configure && make && make install
[root@node1 node-v8.2.1]# cd ~/elk
[root@node1 elk]# tar xf phantomjs-2.1.1-linux-x86_64.tar.bz2
[root@node1 elk]# cd phantomjs-2.1.1-linux-x86_64/bin/
[root@node1 bin]# ls
phantomjs
[root@node1 bin]# cp phantomjs /usr/local/bin/
[root@node1 bin]# cd ~/elk/
[root@node1 elk]# tar xf elasticsearch-head.tar.gz
[root@node1 elk]# cd elasticsearch-head/
[root@node1 elasticsearch-head]# npm install
npm WARN deprecated fsevents@1.2.13: The v1 package contains DANGEROUS / INSECURE binaries.
```
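Stepping back to the node configuration above: the two nodes only form one cluster if both files agree on `cluster.name`, and each node needs a unique `node.name`. A stdlib-only sketch of that invariant (the `parse_flat_yaml` helper is hypothetical and only handles flat `key: value` lines, not real YAML):

```python
# Sanity-check two flat elasticsearch.yml snippets: nodes join the same cluster
# only when cluster.name matches, and node.name must be unique per node.
# parse_flat_yaml is a hypothetical helper for flat "key: value" lines.

def parse_flat_yaml(text):
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        settings[key.strip()] = value.strip()
    return settings

node1 = parse_flat_yaml("""
cluster.name: my-elk-cluster
node.name: node1
http.port: 9200
""")
node2 = parse_flat_yaml("""
cluster.name: my-elk-cluster
node.name: node2
http.port: 9200
""")

assert node1["cluster.name"] == node2["cluster.name"]  # same cluster
assert node1["node.name"] != node2["node.name"]        # unique node names
```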
```shell
Upgrade to safe fsevents v2
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@^1.0.0 (node_modules/karma/node_modules/chokidar/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.2.13: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
npm WARN elasticsearch-head@0.0.0 license should be a valid SPDX license expression
up to date in 3.536s
```

Modify the Elasticsearch configuration file to allow head (served on port 9100) to query the REST API on port 9200:

```shell
[root@node1 ~]# vim /etc/elasticsearch/elasticsearch.yml
84 # ---------------------------------- Various -----------------------------------
85 #
86 # Require explicit names when deleting indices:
87 #
88 #action.destructive_requires_name: true
89 http.cors.enabled: true          # enable cross-origin access support; default is false
90 http.cors.allow-origin: "*"      # domains allowed cross-origin access
[root@node1 ~]# systemctl restart elasticsearch.service
```

Start elasticsearch-head and verify the listener:

```shell
cd /root/elk/elasticsearch-head
npm run start

# check the listener
netstat -anput | grep :9100
```

Then browse to http://192.168.31.51:9100.

## Install Logstash on node1

```shell
[root@node1 elk]# rpm -ivh logstash-5.5.1.rpm
warning: logstash-5.5.1.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
package logstash-1:5.5.1-1.noarch is already installed

# start it and create a symlink
[root@node1 elk]# systemctl start logstash.service
[root@node1 elk]# ln -s /usr/share/logstash/bin/logstash /usr/local/bin/

# test 1
[root@node1 elk]# logstash -e 'input { stdin{} } output { stdout{} }'
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties.
```
```shell
Using default config which logs to console
16:03:50.250 [main] INFO logstash.setting.writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
16:03:50.256 [main] INFO logstash.setting.writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
16:03:50.330 [LogStash::Runner] INFO logstash.agent - No persistent UUID file found. Generating new UUID {:uuid=>"9ba08544-a7a7-4706-a3cd-2e2ca163548d", :path=>"/usr/share/logstash/data/uuid"}
16:03:50.584 [[main]-pipeline-manager] INFO logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
16:03:50.739 [[main]-pipeline-manager] INFO logstash.pipeline - Pipeline main started
The stdin plugin is now waiting for input:
16:03:50.893 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
^C16:04:32.838 [SIGINT handler] WARN logstash.runner - SIGINT received. Shutting down the agent.
16:04:32.855 [LogStash::Runner] WARN logstash.agent - stopping pipeline {:id=>"main"}

# test 2
[root@node1 elk]# logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug } }'
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties.
```
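The `rubydebug` codec in test 2 pretty-prints each event with the metadata Logstash attaches to the raw line: `@timestamp`, `@version`, `host`, and `message`. A rough Python sketch of that enrichment step (the field names mirror rubydebug's output; the `make_event` helper itself is hypothetical):

```python
# Sketch of how Logstash wraps a raw stdin line into an event
# (field names mirror what the rubydebug codec prints; helper is hypothetical).
import datetime
import socket

def make_event(message):
    return {
        "@timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "@version": "1",
        "host": socket.gethostname(),
        "message": message,
    }

event = make_event("hello elk")
print(event["message"])  # hello elk
```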
```shell
Using default config which logs to console
16:46:23.975 [[main]-pipeline-manager] INFO logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
The stdin plugin is now waiting for input:
16:46:24.014 [[main]-pipeline-manager] INFO logstash.pipeline - Pipeline main started
16:46:24.081 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
^C16:46:29.970 [SIGINT handler] WARN logstash.runner - SIGINT received. Shutting down the agent.
16:46:29.975 [LogStash::Runner] WARN logstash.agent - stopping pipeline {:id=>"main"}

# test 3
[root@node1 elk]# logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["192.168.31.51:9200"] } }'
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties.
```
```shell
Using default config which logs to console
16:46:55.951 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.31.51:9200/]}}
16:46:55.955 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.31.51:9200/, :path=>"/"}
16:46:56.049 [[main]-pipeline-manager] WARN logstash.outputs.elasticsearch - Restored connection to ES instance {:url=>#<Java::JavaNet::URI:0x3a106333>}
16:46:56.068 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Using mapping template from {:path=>nil}
16:46:56.204 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
16:46:56.233 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Installing elasticsearch template to _template/logstash
16:46:56.429 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<Java::JavaNet::URI:0x19aeba5c>]}
16:46:56.432 [[main]-pipeline-manager] INFO logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
16:46:56.461 [[main]-pipeline-manager] INFO logstash.pipeline - Pipeline main started
The stdin plugin is now waiting for input:
16:46:56.561 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
^C16:46:57.638 [SIGINT handler] WARN logstash.runner - SIGINT received. Shutting down the agent.
16:46:57.658 [LogStash::Runner] WARN logstash.agent - stopping pipeline {:id=>"main"}
```

Logstash collection config files are stored in /etc/logstash/conf.d by default. A Logstash config file consists of three sections: input, output, and filter (as needed). The standard format is:

```
input {...}   # input
filter {...}  # filter (optional)
output {...}  # output
```

Each section can also declare multiple plugins. For example, to specify two log source files:

```
input {
    file { path => "/var/log/messages" type => "syslog" }
    file { path => "/var/log/apache/access.log" type => "apache" }
}
```

### Case: collect system logs with Logstash

```shell
[root@node1 conf.d]# chmod o+r /var/log/messages
[root@node1 conf.d]# vim /etc/logstash/conf.d/system.conf
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.31.51:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}
[root@node1 conf.d]# systemctl restart logstash.service
```

## Install Kibana on node1

```shell
cd ~/elk
[root@node1 elk]# rpm -ivh kibana-5.5.1-x86_64.rpm
warning: kibana-5.5.1-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...
```
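The `%{+YYYY.MM.dd}` suffix in the `index` setting above makes Logstash write one index per day, which keeps retention and cleanup simple. A quick Python illustration of the naming scheme (re-expressing the pattern with `strftime` is an assumption for illustration, not Logstash's own code):

```python
# Daily index naming like Logstash's "system-%{+YYYY.MM.dd}" pattern,
# re-expressed with strftime for illustration.
from datetime import date

def index_for(day, prefix="system"):
    return f"{prefix}-{day.strftime('%Y.%m.%d')}"

print(index_for(date(2023, 9, 1)))  # system-2023.09.01
```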
```shell
                                      ################################# [100%]
Upgrading/installing...
   1:kibana-5.5.1-1                   ################################# [100%]
[root@node1 elk]# vim /etc/kibana/kibana.yml
2  server.port: 5601                               # port Kibana listens on
7  server.host: "0.0.0.0"                          # address Kibana listens on
21 elasticsearch.url: "http://192.168.31.51:9200"  # connection to Elasticsearch
30 kibana.index: ".kibana"                         # add the .kibana index in Elasticsearch
[root@node1 elk]# systemctl start kibana.service
```

Visit Kibana. On first access you must add an index pattern; add the `system-*` index created earlier.

## Enterprise case: collect httpd access logs

Install Logstash on the httpd server following the same procedure as above. Logstash acts as an agent on the httpd server and does not need to run as a service there. Write the httpd log collection config:

```shell
[root@httpd ~]# yum install -y httpd
[root@httpd ~]# systemctl start httpd
[root@httpd ~]# systemctl start logstash
[root@httpd ~]# vim /etc/logstash/conf.d/httpd.conf
input {
    file {
        path => "/var/log/httpd/access_log"
        type => "access"
        start_position => "beginning"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.31.51:9200"]
        index => "httpd-%{+YYYY.MM.dd}"
    }
}
[root@httpd ~]# logstash -f /etc/logstash/conf.d/httpd.conf
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties.
```
```shell
Using default config which logs to console
21:29:34.272 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.31.51:9200/]}}
21:29:34.275 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.31.51:9200/, :path=>"/"}
21:29:34.400 [[main]-pipeline-manager] WARN logstash.outputs.elasticsearch - Restored connection to ES instance {:url=>#<Java::JavaNet::URI:0x1c254b0a>}
21:29:34.423 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Using mapping template from {:path=>nil}
21:29:34.579 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
21:29:34.585 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<Java::JavaNet::URI:0x3b483278>]}
21:29:34.588 [[main]-pipeline-manager] INFO logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
21:29:34.845 [[main]-pipeline-manager] INFO logstash.pipeline - Pipeline main started
21:29:34.921 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
```
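The events shipped by this pipeline carry the raw access-log line in the `message` field; splitting it into fields is normally a filter's job. A small Python sketch of parsing one Apache common-log-format line into the kind of fields a filter would extract (the regex, sample line, and field names here are chosen for illustration, not taken from Logstash):

```python
import re

# Apache common log format, e.g.:
# 192.168.31.1 - - [12/Sep/2023:21:30:01 +0800] "GET / HTTP/1.1" 200 12
LOG_RE = re.compile(
    r'(?P<client>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<size>\d+|-)'
)

def parse_access_line(line):
    """Return a dict of fields, or None if the line does not match."""
    m = LOG_RE.match(line)
    return m.groupdict() if m else None

fields = parse_access_line(
    '192.168.31.1 - - [12/Sep/2023:21:30:01 +0800] "GET / HTTP/1.1" 200 12'
)
print(fields["method"], fields["status"])  # GET 200
```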