Because a project required installing Kafka v1.1.1 on a Kunpeng server running Kylin Linux, the installation and configuration process is recorded here.

Environment

# Show kernel and architecture details
[root@test kafka_2.12-1.1.1]# uname -a
Linux test.novalocal 4.19.148 #1 SMP Mon Oct 5 22:04:46 EDT 2020 aarch64 aarch64 aarch64 GNU/Linux

# Show the OS release
[root@test kafka_2.12-1.1.1]# cat /etc/kylin-release
Kylin Linux Advanced Server release V10 (Tercel)

# Count the logical CPUs
[root@test kafka_2.12-1.1.1]# cat /proc/cpuinfo | grep processor | wc -l
32

# Show CPU details
[root@test kafka_2.12-1.1.1]# lscpu
Architecture:        aarch64
CPU op-mode(s):      64-bit
Byte Order:          Little Endian
CPU(s):              32
On-line CPU(s) list: 0-31
Thread(s) per core:  1
Core(s) per socket:  16
Socket(s):           2
NUMA node(s):        2
Vendor ID:           HiSilicon
Model:               0
Model name:          Kunpeng-920
Stepping:            0x1
CPU max MHz:         2400.0000
CPU min MHz:         2400.0000
BogoMIPS:            200.00
L1d cache:           2 MiB
L1i cache:           2 MiB
L2 cache:            16 MiB
L3 cache:            64 MiB
NUMA node0 CPU(s):   0-15
NUMA node1 CPU(s):   16-31
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Not affected
Vulnerability Spectre v1:        Mitigation; __user pointer sanitization
Vulnerability Spectre v2:        Not affected
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
Flags:               fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp asimdfhm

# Show the Java version
[root@test kafka_2.12-1.1.1]# java -version
openjdk version "1.8.0_242"
OpenJDK Runtime Environment (build 1.8.0_242-b08)
OpenJDK 64-Bit Server VM (build 25.242-b08, mixed mode)

Download

Download the release from the Apache site. The downloads page is https://kafka.apache.org/downloads; find version 1.1.1 there. The file used here is kafka_2.12-1.1.1.tgz, available from https://archive.apache.org/dist/kafka/1.1.1/kafka_2.12-1.1.1.tgz.

The version string has two parts: the first (2.12) is the version of Scala the broker was built with, and the second (1.1.1) is the Kafka version. The latest Kafka releases are already at 3.x; 1.1.1 is used here only because this project requires it, and a newer version is recommended otherwise.

Scala 2.11 and Scala 2.12 are two major versions of the Scala programming language, and the key differences between them are:

1. Java requirement: Scala 2.12 requires Java 8 or later, while Scala 2.11 also runs on Java 6 and 7.
2. Performance and bytecode: Scala 2.12 takes advantage of Java 8 features, compiling lambdas via invokedynamic and traits to interfaces with default methods, which yields smaller and generally faster bytecode than 2.11.
3. Binary compatibility: as with all Scala minor versions, 2.11 and 2.12 are binary incompatible, so libraries must be published separately for each; this is why artifacts, Kafka included, carry a _2.11 or _2.12 suffix.
4. Community support: 2.12 is the more current of the two and receives broader library and community support, while support for 2.11 has been winding down.

In short: for new projects, prefer Scala 2.12 or later; an existing Scala 2.11 project can migrate to 2.12 when convenient, keeping the binary-compatibility break in mind. For Kafka itself the Scala version only affects the broker's bundled runtime; clients written in Java are unaffected.
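To double-check which Scala build a given broker ships with, list the bundled Scala runtime jars; for kafka_2.12-1.1.1 the listing should include a scala-library-2.12.x.jar (the exact patch version depends on what the release was built against):

[root@test kafka_2.12-1.1.1]# ls libs/ | grep -i '^scala'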
Deployment

1. Unpack the archive:

tar -zxvf kafka_2.12-1.1.1.tgz

2. The unpacked path:

[root@test kafka_2.12-1.1.1]# pwd
/data/public/kafka/kafka_2.12-1.1.1

3. Edit config/server.properties; only listeners and advertised.listeners need to be changed:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://192.168.31.100:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
advertised.listeners=PLAINTEXT://192.168.31.100:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=409600000

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
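One caveat before starting the broker: the default log.dirs=/tmp/kafka-logs keeps all message data under /tmp, which many systems purge on reboot. A minimal sketch of pointing it at persistent storage instead, assuming /data/public/kafka/kafka-logs as a hypothetical target directory:

# Assumed persistent data directory; adjust to your own layout
mkdir -p /data/public/kafka/kafka-logs
# Rewrite the log.dirs line in place (run from the Kafka installation directory)
sed -i 's|^log.dirs=.*|log.dirs=/data/public/kafka/kafka-logs|' config/server.properties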
Startup

1. Start Zookeeper first:

nohup /data/public/kafka/kafka_2.12-1.1.1/bin/zookeeper-server-start.sh /data/public/kafka/kafka_2.12-1.1.1/config/zookeeper.properties > /dev/null 2>&1 &

2. Then start Kafka:

nohup /data/public/kafka/kafka_2.12-1.1.1/bin/kafka-server-start.sh /data/public/kafka/kafka_2.12-1.1.1/config/server.properties > /dev/null 2>&1 &

3. Check that both services are up:

[root@test kafka_2.12-1.1.1]# netstat -nltp | grep -E "(2181|9092)"
tcp6       0      0 192.168.31.100:9092     :::*        LISTEN      970022/java
tcp6       0      0 :::2181                 :::*        LISTEN      966577/java

This shows Kafka is running: 2181 is the Zookeeper port and 9092 is the Kafka port.

Opening client access

firewall-cmd --zone=public --add-port=2181/tcp --permanent
firewall-cmd --zone=public --add-port=9092/tcp --permanent
firewall-cmd --reload

Accessing with Offset Explorer 2.0

Once the connection is configured, you can connect to the server and start producing and consuming messages with Kafka; a command-line smoke test is sketched below as well.
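As a quick verification from the installation directory, Kafka's bundled console tools can create a topic and pass a message through it. This is a minimal sketch: the topic name test is an arbitrary choice, and the addresses match the configuration above. Note that in Kafka 1.1.1 kafka-topics.sh still addresses Zookeeper directly and the console producer takes --broker-list.

# Create a single-partition test topic (topic name "test" is assumed)
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

# Produce: type a few lines, then exit with Ctrl+C
bin/kafka-console-producer.sh --broker-list 192.168.31.100:9092 --topic test

# Consume everything from the beginning (exit with Ctrl+C)
bin/kafka-console-consumer.sh --bootstrap-server 192.168.31.100:9092 --topic test --from-beginning

If the consumer echoes back what the producer sent, the broker, the listener configuration, and the firewall rules are all working.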
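When the services need to be stopped, the distribution ships matching stop scripts; stopping the broker before Zookeeper avoids spurious reconnect errors:

# Stop Kafka first, then Zookeeper
/data/public/kafka/kafka_2.12-1.1.1/bin/kafka-server-stop.sh
/data/public/kafka/kafka_2.12-1.1.1/bin/zookeeper-server-stop.sh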
