
Compiling Kafka from Source


Ⅰ How to trace Kafka's ack/fail message mechanism in the source code

Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design. Kafka is a high-throughput distributed publish-subscribe messaging system with the following characteristics:
(1) It persists messages through an O(1) disk data structure, which keeps performance stable over time even with terabytes of stored messages.
(2) High throughput: even on very ordinary hardware, Kafka can handle hundreds of thousands of messages per second.
(3) It supports partitioning messages across Kafka servers and distributing consumption over a cluster of consumer machines.
(4) It supports parallel data loading into Hadoop.
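
The question asks about tracing the ack/fail mechanism, which the build walkthrough below does not actually cover. As a pointer, in the 0.8-era producer the acknowledgement level is controlled by the request.required.acks setting; a minimal sketch, assuming the 0.8 tooling and a broker on localhost:9092:

# request.required.acks controls producer acknowledgements in Kafka 0.8:
#   0 = no ack, 1 = leader ack, -1 = wait for all in-sync replicas
# the 0.8 console producer exposes the same knob on the command line:
bin/kafka-console-producer.sh --broker-list localhost:9092 \
  --request-required-acks -1 --topic test
# when retries are exhausted, the 0.8 producer surfaces the failure as
# kafka.common.FailedToSendMessageException, which is a good starting
# point for tracing the ack/fail path through the source
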
1. Compiling with Kafka's bundled script
After downloading the Kafka source code, we can use the bundled gradlew script to compile it:
# wget
# tar -zxf kafka-0.8.1.1-src.tgz
# cd kafka-0.8.1.1-src
# ./gradlew releaseTarGz
Running the above commands to compile produces the following error:
:core:signArchives FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':core:signArchives'.
> Cannot perform signing task ':core:signArchives' because it
has no configured signatory

* Try:
Run with --stacktrace option to get the stack trace. Run with
--info or --debug option to get more log output.

BUILD FAILED
This is a known bug in the build; we can skip the signing task and compile with the following command instead:
./gradlew releaseTarGzAll -x signArchives
This time the build succeeds (some warnings may appear during the process). We can also tell the build which Scala version to compile against:
./gradlew -PscalaVersion=2.10.3 releaseTarGz -x signArchives
When the build finishes, a kafka_2.10-0.8.1.1.tgz file appears under core/build/distributions/; it is equivalent to the binary release downloadable from the official site and can be used directly.
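
As a quick sanity check, the freshly built tarball can be inspected and used like the official binary release; a sketch, with /opt as an assumed install location:

# list the contents of the freshly built release tarball
tar -tzf core/build/distributions/kafka_2.10-0.8.1.1.tgz | head

# unpack it and start a broker against a running ZooKeeper
tar -zxf core/build/distributions/kafka_2.10-0.8.1.1.tgz -C /opt
cd /opt/kafka_2.10-0.8.1.1
bin/kafka-server-start.sh config/server.properties
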
2. Compiling with sbt
We can also compile Kafka with sbt. The steps are as follows:
# git clone
# cd kafka
# git checkout -b 0.8 remotes/origin/0.8
# ./sbt update
[info] [SUCCESSFUL ] org.eclipse.jdt#core;3.1.1!core.jar (2243ms)
[info] downloading ...
[info] [SUCCESSFUL ] ant#ant;1.6.5!ant.jar (1150ms)
[info] Done updating.
[info] Resolving org.apache.hadoop#hadoop-core;0.20.2 ...
[info] Done updating.
[info] Resolving com.yammer.metrics#metrics-annotation;2.2.0 ...
[info] Done updating.
[info] Resolving com.yammer.metrics#metrics-annotation;2.2.0 ...
[info] Done updating.
[success] Total time: 168 s, completed Jun 18, 2014 6:51:38 PM

# ./sbt package
[info] Set current project to Kafka (in build file:/export1/spark/kafka/)
Getting Scala 2.8.0 ...
:: retrieving :: org.scala-sbt#boot-scala
confs: [default]
3 artifacts copied, 0 already retrieved (14544kB/27ms)
[success] Total time: 1 s, completed Jun 18, 2014 6:52:37 PM
For Kafka 0.8 and above, you also need to run the following command:
# ./sbt assembly-package-dependency
[info] Loading project definition from /export1/spark/kafka/project
[warn] Multiple resolvers having different access mechanism configured with
same name 'sbt-plugin-releases'. To avoid conflict, Remove duplicate project
resolvers (`resolvers`) or rename publishing resolver (`publishTo`).
[info] Set current project to Kafka (in build file:/export1/spark/kafka/)
[warn] Credentials file /home/wyp/.m2/.credentials does not exist
[info] Including slf4j-api-1.7.2.jar
[info] Including metrics-annotation-2.2.0.jar
[info] Including scala-compiler.jar
[info] Including scala-library.jar
[info] Including slf4j-simple-1.6.4.jar
[info] Including metrics-core-2.2.0.jar
[info] Including snappy-java-1.0.4.1.jar
[info] Including zookeeper-3.3.4.jar
[info] Including log4j-1.2.15.jar
[info] Including zkclient-0.3.jar
[info] Including jopt-simple-3.2.jar
[warn] Merging 'META-INF/NOTICE' with strategy 'rename'
[warn] Merging 'org/xerial/snappy/native/README' with strategy 'rename'
[warn] Merging 'META-INF/maven/org.xerial.snappy/snappy-java/LICENSE'
with strategy 'rename'
[warn] Merging 'LICENSE.txt' with strategy 'rename'
[warn] Merging 'META-INF/LICENSE' with strategy 'rename'
[warn] Merging 'META-INF/MANIFEST.MF' with strategy 'discard'
[warn] Strategy 'discard' was applied to a file
[warn] Strategy 'rename' was applied to 5 files
[success] Total time: 3 s, completed Jun 18, 2014 6:53:41 PM
We can specify the Scala version in sbt as well:
sbt "++2.10.3 update"
sbt "++2.10.3 package"
sbt "++2.10.3 assembly-package-dependency"

Ⅱ Apache Atlas standalone deployment (Hadoop, Hive, Kafka, HBase, Solr, Zookeeper)

Deploying Apache Atlas standalone on a CentOS 7 virtual machine (IP: 192.168.198.131) requires the following steps:


Apache Atlas Standalone Deployment (integrating Hadoop, Hive, Kafka, HBase, Solr, Zookeeper)


**Prerequisites**: Java 1.8, Hadoop 2.7.4, a JDBC driver, Zookeeper (used by Atlas's HBase and Solr)


1. Hadoop Installation

  • Set the hostname to master

  • Disable the firewall

  • Set up passwordless SSH login

  • Unpack Hadoop 2.7.4

  • Install the JDK

  • Check the Hadoop version

  • Configure the Hadoop environment (see the sketch after this list)

  • Format HDFS (make sure the paths exist)

  • Set the environment variables

  • Generate SSH keys and configure passwordless login

  • Start the Hadoop services

  • Access the Hadoop cluster
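
A minimal sketch of the environment-variable and HDFS-format steps above; the install paths (/opt/hadoop-2.7.4, the JDK location) are assumptions, not from the original:

# assumed install locations; adjust to your layout
export JAVA_HOME=/usr/java/jdk1.8.0_151
export HADOOP_HOME=/opt/hadoop-2.7.4
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

# format the NameNode once, then start HDFS and YARN
hdfs namenode -format
start-dfs.sh
start-yarn.sh

# verify the daemons; the NameNode web UI listens on port 50070 in Hadoop 2.x
jps
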


2. Hive Installation

  • Unpack Hive

  • Configure the environment variables

  • Verify the Hive version

  • Copy the MySQL driver into hive/lib

  • Create the MySQL database and run the setup commands (see the sketch after this list)

  • Run Hive commands

  • Check the databases that were created
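
A sketch of the MySQL-backed metastore setup these steps describe; the driver version, database name, and credentials are placeholders:

# copy the JDBC driver into Hive's classpath
cp mysql-connector-java-5.1.47.jar $HIVE_HOME/lib/

# create the metastore database
mysql -uroot -p -e "CREATE DATABASE hive DEFAULT CHARACTER SET latin1;"

# point hive-site.xml at it, e.g.
#   javax.jdo.option.ConnectionURL=jdbc:mysql://master:3306/hive
#   javax.jdo.option.ConnectionDriverName=com.mysql.jdbc.Driver

# verify
hive -e "SHOW DATABASES;"
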


3. Kafka Pseudo-Distributed Installation

  • Install and start Kafka

  • Test Kafka (using kafka-console-producer.sh and kafka-console-consumer.sh)

  • Configure property files for multiple Kafka servers (see the sketch after this list)
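
A sketch of the multi-broker setup on a single host; the broker ids, ports, and log directories are assumptions, and the console-tool flags assume a 0.10+/2.x-era Kafka:

# each broker needs its own broker.id, port, and log.dirs
cp config/server.properties config/server-1.properties
# then edit config/server-1.properties:
#   broker.id=1
#   listeners=PLAINTEXT://:9093
#   log.dirs=/tmp/kafka-logs-1

# start each broker with its own properties file
bin/kafka-server-start.sh -daemon config/server.properties
bin/kafka-server-start.sh -daemon config/server-1.properties

# smoke test with the console producer/consumer
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
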


4. HBase Installation and Configuration

  • Unpack HBase

  • Configure the environment variables

  • Edit the configuration files (see the sketch after this list)

  • Start HBase

  • Access the HBase web UI

  • Resolve configuration issues (e.g. JDK version compatibility, ZooKeeper integration)
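
A sketch of the HBase-on-HDFS configuration these steps refer to, using the external ZooKeeper; the hostname master and the HDFS port are assumptions:

# minimal hbase-site.xml pointing at HDFS and the external ZooKeeper
cat > $HBASE_HOME/conf/hbase-site.xml <<'EOF'
<configuration>
  <property><name>hbase.rootdir</name><value>hdfs://master:9000/hbase</value></property>
  <property><name>hbase.cluster.distributed</name><value>true</value></property>
  <property><name>hbase.zookeeper.quorum</name><value>master</value></property>
</configuration>
EOF

# stop HBase from managing its own ZooKeeper (hbase-env.sh)
echo 'export HBASE_MANAGES_ZK=false' >> $HBASE_HOME/conf/hbase-env.sh

start-hbase.sh
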


5. Solr Cluster Installation

  • Unpack Solr

  • Start and test Solr

  • Configure ZooKeeper and SOLR_PORT

  • Create the Solr collections (see the sketch after this list)
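
Atlas indexes into three collections (vertex_index, edge_index, fulltext_index); a sketch in SolrCloud mode, with the ZooKeeper address and the Atlas conf path assumed:

# start Solr in cloud mode against the external ZooKeeper
bin/solr start -cloud -z master:2181 -p 8983

# create the collections Atlas expects, using the config set shipped with Atlas
bin/solr create -c vertex_index -d /opt/atlas/conf/solr -shards 1 -replicationFactor 1
bin/solr create -c edge_index -d /opt/atlas/conf/solr -shards 1 -replicationFactor 1
bin/solr create -c fulltext_index -d /opt/atlas/conf/solr -shards 1 -replicationFactor 1
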


6. Apache Atlas Standalone Deployment

  • Build Apache Atlas from source, choosing the standalone deployment option

  • Do not use the embedded HBase and Solr

  • After the build, wire the external Solr into Apache Atlas

  • Edit the configuration files to point at the correct storage back ends (see the sketch after this list)
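
A sketch of the build and of the key conf/atlas-application.properties entries for external HBase and Solr; the hostnames are assumptions:

# build a distribution that relies on external HBase/Solr
# (i.e. without the embedded-hbase-solr profile)
mvn clean -DskipTests package -Pdist

# key settings in conf/atlas-application.properties:
#   atlas.graph.storage.backend=hbase
#   atlas.graph.storage.hostname=master:2181
#   atlas.graph.index.search.backend=solr
#   atlas.graph.index.search.solr.mode=cloud
#   atlas.graph.index.search.solr.zookeeper-url=master:2181
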


7. Resolving Standalone Deployment Issues

  • Make sure the HBase configuration files are in the right location (see the sketch after this list)

  • Resolve the JanusGraph and HBase exceptions thrown at startup

  • Make sure the Solr cluster configuration is correct
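
For the HBase configuration issue in the first item, one common fix (paths assumed) is to expose HBase's client config to Atlas:

# make HBase's client config visible to Atlas, e.g. in atlas-env.sh
export HBASE_CONF_DIR=/opt/hbase/conf
# or link hbase-site.xml into Atlas's conf/hbase directory
ln -s /opt/hbase/conf/hbase-site.xml /opt/atlas/conf/hbase/hbase-site.xml
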


Once deployment is complete, Apache Atlas runs standalone, integrated with Hadoop, Hive, Kafka, HBase, Solr, and Zookeeper, providing data-lake and metadata management capabilities.
