I am new to this area. I want to train on a series of data and then make predictions. I have tried for a long time; could you tell me what I am doing wrong?
My training data looks like this (the first few lines):
-1 '13731#276 |f gender:0 age_range:2 action0:1 action1:0 action2:1 action3:0
-1 '70175#4214 |f gender:0 age_range:4 action0:0 action1:0 action2:1 action3:0
-1 '89370#2598 |f gender:1 age_range:2 action0:8 action1:0 action2:1 action3:0
1 '89371#1250 |f gender:0 age_range:2 action0:0 action1:0 action2:1 action3:0
-1 '89372#2792 |f gender:1 age_range:5 action0:0 action1:0 action2:1 action3:0
1 '89372#962 |f gender:1 age_range:5 action0:0 action1:0 action2:1 action3:0
-1 '89373#4472 |f gender:0 age_range:7 action0:5 action1:0 action2:1 action3:0
and my test data looks like this:
1 '177796#1807 |f gender:0 age_range:5 action0:5 action1:0 action2:1 action3:0
1 '155638#2445 |f gender:0 age_range:7 action0:3 action1:0 action2:1 action3:0
1 '155639#658 |f gender:1 age_range:2 action0:5 action1:0 action2:1 action3:0
1 '127479#2480 |f gender:0 age_range:7 action0:0 action1:0 action2:1 action3:0
1 '127478#1245 |f gender:0 age_range:4 action0:1 action1:0 action2:1 action3:0
1 '127473#4995 |f gender:1 age_range:4 action0:13 action1:0 action2:1 action3:0
1 '127472#45 |f gender:0 age_range:7 action0:4 action1:0 action2:1 action3:0
Yes, they look the same; I don't know if that is right. I have seen many people on GitHub write them this way.
And my vw commands are as follows:
vw -d train.vw --loss_function=logistic -f model.vw
vw -d test.vw -t -i model.vw --loss_function=logistic -r shop.preds.txt
The result is:
-2.816693 177796#1807
-2.817430 155638#2445
-2.981194 155639#658
-2.821442 127479#2480
-2.823012 127478#1245
-2.968556 127473#4995
-2.816092 127472#45
-2.820939 127471#4010
-2.975476 127470#593
-2.820105 155634#4103
-2.799539 155635#2980
-3.139279 127475#1469
I don't know why that is; the numbers are all below -2. My ideal result would look like this:
202178#1665,0.67
156148#4730,0.50
132360#2459,0.24
132360#144,0.99
180387#1534,0.48
187963#1360,0.19
158187#2534,0.54
188206#4890,0.70
At least I want the numbers to be correct, but they are all 1.
Could you tell me how to fix this? Thanks!
If you want to predict probabilities, then instead of vw -d test.vw -t -i model.vw --loss_function=logistic -r shop.preds.txt you should use
vw -d test.vw -t -i model.vw --loss_function=logistic --link=logistic -p shop.preds.txt
If you want to get the most probable label (-1 or +1), use
vw -d test.vw -t -i model.vw --loss_function=logistic --binary -p shop.preds.txt
See https://github.com/JohnLangford/vowpal_wabbit/wiki/Predicting-probabilities
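For reference, the numbers you got from -r are raw scores (the value before the link function), not probabilities; a score around -2.8 corresponds to a probability of about 0.06. If you ever need to convert raw scores yourself, applying the sigmoid 1/(1+exp(-score)) gives the same result as --link=logistic. A quick sketch over the -r output shown above (score in column 1, tag in column 2); this post-processing is unnecessary once you use --link=logistic:
awk '{printf "%s,%.2f\n", $2, 1/(1+exp(-$1))}' shop.preds.txt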
Related
When I start my server with the docker-compose up rails command, it shows [Webpacker] Compiling... for about ten seconds and then gives this output:
rails_1 | [Webpacker] Compilation failed:
rails_1 | Hash: e1d0cde8c28f1ff2a0a2
rails_1 | Version: webpack 4.46.0
rails_1 | Time: 19214ms
rails_1 | Built at: 03/17/2022 11:42:08 AM
rails_1 | Asset Size Chunks Chunk Names
rails_1 | js/application-e9da3e13d93a6c1058bd.js 5.54 MiB application [emitted] [immutable] application
rails_1 | js/application-e9da3e13d93a6c1058bd.js.map 5.14 MiB application [emitted] [dev] application
rails_1 | js/board-48bc7b5cbf760044039a.js 3.77 MiB board [emitted] [immutable] board
rails_1 | js/board-48bc7b5cbf760044039a.js.map 3.88 MiB board [emitted] [dev] board
rails_1 | js/provider-c2471a0d3965537a7a3f.js 3.74 MiB provider [emitted] [immutable] provider
rails_1 | js/provider-c2471a0d3965537a7a3f.js.map 3.85 MiB provider [emitted] [dev] provider
rails_1 | js/server_rendering-a1f5869b9a9be21b0a76.js 9.15 MiB server_rendering [emitted] [immutable] server_rendering
rails_1 | js/server_rendering-a1f5869b9a9be21b0a76.js.map 8.51 MiB server_rendering [emitted] [dev] server_rendering
rails_1 | manifest.json 1.61 KiB [emitted]
rails_1 | media/fonts/OpenSans-Regular-d7d7b8359eeb9cddfba6cd4cef3c1702.ttf 127 KiB [emitted]
rails_1 | media/fonts/OpenSans-SemiBold-d7261533b9a545ddc769dc2fe86dc40e.ttf 127 KiB [emitted]
rails_1 | media/images/logo-cfbdadde.png 1.07 KiB [emitted]
rails_1 | Entrypoint application = js/application-e9da3e13d93a6c1058bd.js js/application-e9da3e13d93a6c1058bd.js.map
rails_1 | Entrypoint board = js/board-48bc7b5cbf760044039a.js js/board-48bc7b5cbf760044039a.js.map
rails_1 | Entrypoint provider = js/provider-c2471a0d3965537a7a3f.js js/provider-c2471a0d3965537a7a3f.js.map
rails_1 | Entrypoint server_rendering = js/server_rendering-a1f5869b9a9be21b0a76.js js/server_rendering-a1f5869b9a9be21b0a76.js.map
rails_1 | [./app/assets/images/logo.png] 76 bytes {application} {server_rendering} [built]
rails_1 | [./app/javascript/components sync recursive ^\.\/.*$] ./app/javascript/components sync ^\.\/.*$ 9.05 KiB {server_rendering} [built]
rails_1 | [./app/javascript/components/app.tsx] 5.63 KiB {application} {server_rendering} [built]
rails_1 | [./app/javascript/components/provider/index.js] 49 bytes {provider} {server_rendering} [built]
rails_1 | [./app/javascript/components/subscription/index.js] 62 bytes {board} {server_rendering} [built]
rails_1 | [./app/javascript/packs/application.js] 1.71 KiB {application} [built]
rails_1 | [./app/javascript/packs/board.jsx] 1.13 KiB {board} [built]
rails_1 | [./app/javascript/packs/provider.js] 381 bytes {provider} [built]
rails_1 | [./app/javascript/packs/server_rendering.js] 301 bytes {server_rendering} [built]
rails_1 | [./app/javascript/utils/apollo.js] 5.34 KiB {board} {provider} {server_rendering} [built]
rails_1 | [./node_modules/react-apollo/lib/react-apollo.esm.js] 487 bytes {board} {provider} {server_rendering} [built]
rails_1 | [./node_modules/react-dom/index.js] 1.32 KiB {application} {board} {provider} {server_rendering} [built]
rails_1 | [./node_modules/react-router-dom/index.js] 15.2 KiB {application} {server_rendering} [built]
rails_1 | [./node_modules/react/index.js] 189 bytes {application} {board} {provider} {server_rendering} [built]
rails_1 | [./node_modules/react_ujs/react_ujs/index.js] 6.3 KiB {server_rendering} [built]
rails_1 | + 1550 hidden modules
rails_1 |
rails_1 | WARNING in ./node_modules/graphql-ruby-client/sync/prepareRelay.js 41:22-47
rails_1 | Critical dependency: the request of a dependency is an expression
rails_1 | @ ./node_modules/graphql-ruby-client/sync/generateClient.js
rails_1 | @ ./node_modules/graphql-ruby-client/index.js
rails_1 | @ ./app/javascript/utils/apollo.js
rails_1 | @ ./app/javascript/packs/board.jsx
rails_1 |
rails_1 | ERROR in ./app/javascript/components/action-item/action-item.jsx 303:13-28
rails_1 | "export 'handleKeyPress' (imported as '_handleKeyPress') was not found in '../../utils/helpers'
rails_1 | @ ./app/javascript/components sync ^\.\/.*$
rails_1 | @ ./app/javascript/packs/server_rendering.js
rails_1 |
rails_1 | ERROR in ./app/javascript/components/new-action-item/new-action-item.jsx 141:13-28
rails_1 | "export 'handleKeyPress' (imported as '_handleKeyPress') was not found in '../../utils/helpers'
rails_1 | @ ./app/javascript/components sync ^\.\/.*$
rails_1 | @ ./app/javascript/packs/server_rendering.js
rails_1 |
rails_1 | ERROR in ./app/javascript/components/new-card-body/new-card-body.jsx 188:13-27
rails_1 | "export 'handleKeyPress' was not found in '../../utils/helpers'
rails_1 | @ ./app/javascript/components sync ^\.\/.*$
rails_1 | @ ./app/javascript/packs/server_rendering.js
rails_1 |
rails_1 | ERROR in ./app/javascript/components/comments-dropdown/comments-dropdown.jsx 181:13-27
rails_1 | "export 'handleKeyPress' was not found in '../../utils/helpers'
rails_1 | @ ./app/javascript/components sync ^\.\/.*$
rails_1 | @ ./app/javascript/packs/server_rendering.js
rails_1 |
rails_1 | ERROR in ./app/javascript/components/card-body/card-body.jsx 222:13-27
rails_1 | "export 'handleKeyPress' was not found in '../../utils/helpers'
rails_1 | @ ./app/javascript/components sync ^\.\/.*$
rails_1 | @ ./app/javascript/packs/server_rendering.js
rails_1 |
rails_1 | ERROR in ./node_modules/graphql-ruby-client/subscriptions/ActionCableLink.js
rails_1 | Module not found: Error: Can't resolve '@apollo/client/core' in '/app/node_modules/graphql-ruby-client/subscriptions'
rails_1 | @ ./node_modules/graphql-ruby-client/subscriptions/ActionCableLink.js 33:13-43
rails_1 | @ ./node_modules/graphql-ruby-client/index.js
rails_1 | @ ./app/javascript/utils/apollo.js
rails_1 | @ ./app/javascript/packs/board.jsx
rails_1 |
rails_1 | ERROR in ./node_modules/graphql-ruby-client/subscriptions/PusherLink.js
rails_1 | Module not found: Error: Can't resolve '@apollo/client/core' in '/app/node_modules/graphql-ruby-client/subscriptions'
rails_1 | @ ./node_modules/graphql-ruby-client/subscriptions/PusherLink.js 68:13-43
rails_1 | @ ./node_modules/graphql-ruby-client/index.js
rails_1 | @ ./app/javascript/utils/apollo.js
rails_1 | @ ./app/javascript/packs/board.jsx
rails_1 |
rails_1 | ERROR in ./node_modules/graphql-ruby-client/subscriptions/AblyLink.js
rails_1 | Module not found: Error: Can't resolve '@apollo/client/core' in '/app/node_modules/graphql-ruby-client/subscriptions'
rails_1 | @ ./node_modules/graphql-ruby-client/subscriptions/AblyLink.js 70:13-43
rails_1 | @ ./node_modules/graphql-ruby-client/index.js
rails_1 | @ ./app/javascript/utils/apollo.js
rails_1 | @ ./app/javascript/packs/board.jsx
rails_1 |
rails_1 | ERROR in ./node_modules/graphql-ruby-client/subscriptions/createRelaySubscriptionHandler.js
rails_1 | Module not found: Error: Can't resolve 'relay-runtime' in '/app/node_modules/graphql-ruby-client/subscriptions'
rails_1 | @ ./node_modules/graphql-ruby-client/subscriptions/createRelaySubscriptionHandler.js 14:22-46
rails_1 | @ ./node_modules/graphql-ruby-client/index.js
rails_1 | @ ./app/javascript/utils/apollo.js
rails_1 | @ ./app/javascript/packs/board.jsx
I found that answer, and it helped me a bit. But now, when I start ./bin/webpack-dev-server in a separate window, it gives me this:
./bin/webpack-dev-server
<i> [webpack-dev-server] Project is running at:
<i> [webpack-dev-server] Loopback: http://localhost:8080/
<i> [webpack-dev-server] On Your Network (IPv4): http://192.168.0.13:8080/
<i> [webpack-dev-server] On Your Network (IPv6): http://[fe80::b537:10:b5dc:9bb5]:8080/
<i> [webpack-dev-server] Content not from webpack is served from '/home/yarikhrom/Work/retrospective/retrospective/public' directory
node:internal/crypto/hash:67
this[kHandle] = new _Hash(algorithm, xofLen);
^
Error: error:0308010C:digital envelope routines::unsupported
at new Hash (node:internal/crypto/hash:67:19)
at Object.createHash (node:crypto:135:10)
at module.exports (/home/yarikhrom/Work/retrospective/retrospective/node_modules/webpack/lib/util/createHash.js:135:53)
at NormalModule._initBuildHash (/home/yarikhrom/Work/retrospective/retrospective/node_modules/webpack/lib/NormalModule.js:417:16)
at handleParseError (/home/yarikhrom/Work/retrospective/retrospective/node_modules/webpack/lib/NormalModule.js:471:10)
at /home/yarikhrom/Work/retrospective/retrospective/node_modules/webpack/lib/NormalModule.js:503:5
at /home/yarikhrom/Work/retrospective/retrospective/node_modules/webpack/lib/NormalModule.js:358:12
at /home/yarikhrom/Work/retrospective/retrospective/node_modules/loader-runner/lib/LoaderRunner.js:373:3
at iterateNormalLoaders (/home/yarikhrom/Work/retrospective/retrospective/node_modules/loader-runner/lib/LoaderRunner.js:214:10)
at iterateNormalLoaders (/home/yarikhrom/Work/retrospective/retrospective/node_modules/loader-runner/lib/LoaderRunner.js:221:10) {
opensslErrorStack: [ 'error:03000086:digital envelope routines::initialization error' ],
library: 'digital envelope routines',
reason: 'unsupported',
code: 'ERR_OSSL_EVP_UNSUPPORTED'
}
Node.js v17.7.1
Also, I found in the documentation that I need to work with the webpack.config.js file, but I don't have one in my project. So, what can I do to make Webpacker compile instantly when I start the project or make changes in the frontend part?
docker-compose.yml:
version: "3.7"
services:
app: &app
build:
context: .
dockerfile: ./.dockerdev/Dockerfile
args:
RUBY_VERSION: "2.7.0"
PG_MAJOR: "11"
NODE_MAJOR: "12"
YARN_VERSION: "1.19.1"
BUNDLER_VERSION: "2.2.25"
image: example-dev:1.0.0
tmpfs:
- /tmp
backend: &backend
<<: *app
stdin_open: true
tty: true
volumes:
- .:/app:cached
- ./node_modules:/app/node_modules
- .dockerdev/.psqlrc:/root/.psqlrc:ro
- .dockerdev/bundle:/bundle
- rails_cache:/app/tmp/cache
- packs:/app/public/packs
environment:
- NODE_ENV=development
- RAILS_ENV=${RAILS_ENV:-development}
- REDIS_URL=redis://redis:6379/
- DATABASE_URL=postgres://postgres:postgres@postgres:5432
- POSTGRESQL_HOST=postgres
- POSTGRESQL_PORT=5432
- WEBPACKER_DEV_SERVER_HOST=webpacker
- WEB_CONCURRENCY=1
- HISTFILE=/app/log/.bash_history
- PSQL_HISTFILE=/app/log/.psql_history
- EDITOR=vi
depends_on:
- postgres
- redis
runner:
<<: *backend
command: /bin/bash
ports:
- "3000:3000"
- "3002:3002"
rails:
<<: *backend
command: bundle exec rails server -b 0.0.0.0
ports:
- "3000:3000"
postgres:
image: postgres:11.1
volumes:
- .dockerdev/.psqlrc:/root/.psqlrc:ro
- ./log:/root/log:cached
- postgres:/var/lib/postgresql/data
environment:
- PSQL_HISTFILE=/root/log/.psql_history
ports:
- "5432:5432"
redis:
image: redis:3.2-alpine
volumes:
- redis:/data
ports:
- "6379:6379"
webpacker:
<<: *app
command: ./bin/webpack-dev-server
ports:
- "3035:3035"
volumes:
- .:/app:cached
- ./node_modules:/app/node_modules
- .dockerdev/bundle:/bundle
- packs:/app/public/packs
environment:
- NODE_ENV=${NODE_ENV:-development}
- RAILS_ENV=${RAILS_ENV:-development}
- WEBPACKER_DEV_SERVER_HOST=0.0.0.0
volumes:
postgres:
redis:
node_modules:
rails_cache:
packs:
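A side note on the webpack-dev-server crash itself: error:0308010C / ERR_OSSL_EVP_UNSUPPORTED is the known incompatibility between webpack 4's MD4 hashing and the OpenSSL 3 shipped with Node 17 (the log shows Node.js v17.7.1 on the host, while the Docker image pins NODE_MAJOR 12). A commonly used workaround, if you keep Node 17, is to enable the legacy OpenSSL provider:
NODE_OPTIONS=--openssl-legacy-provider ./bin/webpack-dev-server
Alternatively, run the dev server with a Node LTS version such as 12 or 16 to match the container.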
I think I had this issue too. I fixed it by putting Rails and Webpacker in the same container; it is not the best solution, but it works. Something like this:
docker-compose.yml
version: '3'
services:
web:
build:
context: .
dockerfile: ./docker/development/Dockerfile
command: ["sh", "./docker/development/scripts/start-rails-development-server.sh"]
volumes:
- .:/app
ports:
- 3000:3000
- 3035:3035
depends_on:
- database
start-rails-development-server.sh
#!/bin/sh
echo 'Starting webpack dev server ...' && bin/webpack-dev-server &
echo 'starting Rails server ...' && rm -f tmp/pids/server.pid && bundle exec rails s -b 0.0.0.0
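With a single-container setup like this, one command brings up both servers, and since port 3035 is published alongside 3000, the webpack dev server stays reachable as well:
docker-compose up web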
I am trying to run the Clara Train example, but when I execute startClaraTrainNoteBooks.sh, the container cannot find the NVIDIA driver.
I already know that the script uses docker-compose.yml, so I tested whether docker-compose can find the NVIDIA driver:
services:
test:
image: nvidia/cuda:10.2-base
command: nvidia-smi
deploy:
resources:
reservations:
devices:
- driver: nvidia
capabilities: [gpu]
device_ids: ['0']
Output:
USER@test:~$ docker-compose up
WARNING: Found orphan containers (hp_nvsmi_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Starting hp_test_1 ... done
Attaching to hp_test_1
test_1 | Mon Jun 7 09:01:44 2021
test_1 | +-----------------------------------------------------------------------------+
test_1 | | NVIDIA-SMI 460.27.04 Driver Version: 460.27.04 CUDA Version: 11.2 |
test_1 | |-------------------------------+----------------------+----------------------+
test_1 | | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
test_1 | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
test_1 | | | | MIG M. |
test_1 | |===============================+======================+======================|
test_1 | | 0 GeForce RTX 206... Off | 00000000:01:00.0 Off | N/A |
test_1 | | 0% 34C P8 17W / 215W | 100MiB / 7979MiB | 0% Default |
test_1 | | | | N/A |
test_1 | +-------------------------------+----------------------+----------------------+
test_1 |
test_1 | +-----------------------------------------------------------------------------+
test_1 | | Processes: |
test_1 | | GPU GI CI PID Type Process name GPU Memory |
test_1 | | ID ID Usage |
test_1 | |=============================================================================|
test_1 | +-----------------------------------------------------------------------------+
hp_test_1 exited with code 0
But startClaraTrainNoteBooks.sh cannot find it.
root@claratrain:/claraDevDay# nvidia-smi
root@claratrain:/claraDevDay#
Actually, startDocker.sh can find the driver.
root@c7c2d5597eb8:/claraDevDay# nvidia-smi
Mon Jun 7 09:11:43 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.27.04 Driver Version: 460.27.04 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce RTX 206... Off | 00000000:01:00.0 Off | N/A |
| 0% 35C P8 17W / 215W | 100MiB / 7979MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
root@c7c2d5597eb8:/claraDevDay#
What should I do?
The docker-compose.yml file needs to be rewritten like this, and then it works:
# SPDX-License-Identifier: Apache-2.0
version: "3.8"
services:
claratrain:
container_name: claradevday-pt
hostname: claratrain
##### use vanilla clara train docker
#image: nvcr.io/nvidia/clara-train-sdk:v4.0
##### to build image with GPU dashboard inside jupyter lab
build:
context: ./dockerWGPUDashboardPlugin/ # Project root
dockerfile: ./Dockerfile # Relative to context
image: clara-train-nvdashboard:v4.0
depends_on:
- tritonserver
ports:
- "3030:8888" # Jupyter lab port
- "3031:5000" # AIAA port
ipc: host
volumes:
- ${TRAIN_DEV_DAY_ROOT}:/claraDevDay/
- /raid/users/aharouni/data:/data/
command: "jupyter lab /claraDevDay --ip 0.0.0.0 --allow-root --no-browser --config /claraDevDay/scripts/jupyter_notebook_config.py"
# command: tail -f /dev/null
# tty: true
deploy:
resources:
reservations:
devices:
- driver: nvidia
capabilities: [ gpu ]
# To specify certain GPU uncomment line below
#device_ids: ['0,3']
#############################################################
tritonserver:
image: nvcr.io/nvidia/tritonserver:21.02-py3
container_name: aiaa-triton
hostname: tritonserver
restart: unless-stopped
command: >
sh -c "chmod 777 /triton_models &&
/opt/tritonserver/bin/tritonserver \
--model-store /triton_models \
--model-control-mode="poll" \
--repository-poll-secs=5 \
--log-verbose ${TRITON_VERBOSE}"
volumes:
- ${TRAIN_DEV_DAY_ROOT}/AIAA/workspace/triton_models:/triton_models
# shm_size: 1gb
# ulimits:
# memlock: -1
# stack: 67108864
# logging:
# driver: json-file
Goal: I want to analyse the headers of messages in a topic, and I am looking for a straightforward way to see them. I don't want to develop an extra application or write extensive code just for that; any straightforward tool that shows the headers would be useful.
From this question I read that it is possible with kafkacat, this way:
kafkacat -b kafka-broker:9092 -t my_topic_name -C \
-f '\nKey (%K bytes): %k
Value (%S bytes): %s
Timestamp: %T
Partition: %p
Offset: %o
Headers: %h\n'
So I want to start kafkacat as part of my docker-compose setup, and my first try is:
version: '3'
services:
zookeeper:
image: confluentinc/cp-zookeeper:5.4.0
hostname: zookeeper
container_name: zookeeper
ports:
- "2181:2181"
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
broker:
image: confluentinc/cp-server:5.4.0
hostname: broker
container_name: broker
depends_on:
- zookeeper
ports:
- "9092:9092"
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
CONFLUENT_METRICS_ENABLE: 'true'
CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
kafka-tools:
image: confluentinc/cp-kafka:5.4.0
hostname: kafka
container_name: kafka
command: ["tail", "-f", "/dev/null"]
network_mode: "host"
kafkacat:
image: confluentinc/cp-kafkacat
command:
- bash
- -c
links:
- broker
And I got:
broker | [2020-12-11 10:46:13,084] DEBUG [Controller id=1] Topics not in preferred replica for broker 1 Map() (kafka.controller.KafkaController)
broker | [2020-12-11 10:46:13,084] TRACE [Controller id=1] Leader imbalance ratio for broker 1 is 0.0 (kafka.controller.KafkaController)
confluentinc_kafkacat_1 exited with code 2
broker | [2020-12-11 10:51:13,085] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
Second attempt, based on the kafkacat tutorial:
C:\Users\DEMETRC>docker run --tty confluentinc/cp-kafkacat kafkacat -b localhost:9092 -L
% ERROR: Failed to acquire metadata: Local: Broker transport failure
PS: my Dockerized Kafka is available at localhost:9092 without the Docker network. In the example from the tutorial, Kafka is available at kafka:29092 on the Docker network docker-compose_default.
Any clue how to add kafkacat to my docker-compose? Any other suggestion for investigating the message headers?
*** First edit, after Robin Moffatt's answer
I edited the docker-compose file, adding Robin's suggestion:
version: '3'
services:
zookeeper:
image: confluentinc/cp-zookeeper:5.4.0
hostname: zookeeper
container_name: zookeeper
ports:
- "2181:2181"
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
broker:
image: confluentinc/cp-server:5.4.0
hostname: broker
container_name: broker
depends_on:
- zookeeper
ports:
- "9092:9092"
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
CONFLUENT_METRICS_ENABLE: 'true'
CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
kafka-tools:
image: confluentinc/cp-kafka:5.4.0
hostname: kafka
container_name: kafka
command: ["tail", "-f", "/dev/null"]
network_mode: "host"
kafkacat:
image: edenhill/kafkacat:1.6.0
container_name: kafkacat
links:
- broker
entrypoint:
- /bin/sh
- -c
- |
apk add jq;
while [ 1 -eq 1 ];do sleep 60;done
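To get a shell in the kept-alive kafkacat container (the / # prompts below), I use:
docker exec -it kafkacat /bin/sh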
Here is my topic description
# kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic demotopic
Created topic demotopic.
# kafka-topics --describe --bootstrap-server localhost:9092 --topic demotopic
Topic: demotopic PartitionCount: 1 ReplicationFactor: 1 Configs:
Topic: demotopic Partition: 0 Leader: 1 Replicas: 1 Isr: 1
#
And here is the error I get when trying to use kafkacat:
/ # kafkacat -b localhost:9092 -t demotopic -C -f '\nKey (%K bytes): %k Value (%S bytes): %s Timestamp: %T Partition: %p Offset: %o Headers: %h\n'
%3|1607709622.155|FAIL|rdkafka#consumer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Connect to ipv4#127.0.0.1:9092 failed: Connection refused (after 0ms in state CONNECT)
%3|1607709623.155|FAIL|rdkafka#consumer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Connect to ipv4#127.0.0.1:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
% ERROR: Failed to query metadata for topic demotopic: Local: Broker transport failure
/ #
*** Second edit
Now I can use kafkacat to list all topics, but I am still blocked on getting the message headers:
/ # kafkacat -L -b broker:9092
Metadata for all topics (from broker -1: broker:9092/bootstrap):
1 brokers:
broker 1 at localhost:9092 (controller)
4 topics:
topic "demotopic" with 1 partitions:
partition 0, leader 1, replicas: 1, isrs: 1
topic "_confluent-license" with 1 partitions:
partition 0, leader 1, replicas: 1, isrs: 1
topic "_confluent-metrics" with 12 partitions:
partition 0, leader 1, replicas: 1, isrs: 1
partition 1, leader 1, replicas: 1, isrs: 1
partition 2, leader 1, replicas: 1, isrs: 1
partition 3, leader 1, replicas: 1, isrs: 1
partition 4, leader 1, replicas: 1, isrs: 1
partition 5, leader 1, replicas: 1, isrs: 1
partition 6, leader 1, replicas: 1, isrs: 1
partition 7, leader 1, replicas: 1, isrs: 1
partition 8, leader 1, replicas: 1, isrs: 1
partition 9, leader 1, replicas: 1, isrs: 1
partition 10, leader 1, replicas: 1, isrs: 1
partition 11, leader 1, replicas: 1, isrs: 1
topic "__confluent.support.metrics" with 1 partitions:
partition 0, leader 1, replicas: 1, isrs: 1
/ # kafkacat -b broker:9092 -t demotopic -C -f '\nKey (%K bytes): %k Value (%S bytes): %s Timestamp: %T Partition: %p Offset: %o Headers: %h\n'
%3|1607719862.133|FAIL|rdkafka#consumer-1| [thrd:localhost:9092/1]: localhost:9092/1: Connect to ipv4#127.0.0.1:9092 failed: Connection refused (after 0ms in state CONNECT)
% ERROR: Local: Broker transport failure: localhost:9092/1: Connect to ipv4#127.0.0.1:9092 failed: Connection refused (after 0ms in state CONNECT)
Is there anything wrong with this kafkacat command?
kafkacat -b broker:9092 -t demotopic -C -f '\nKey (%K bytes): %k Value (%S bytes): %s Timestamp: %T Partition: %p Offset: %o Headers: %h\n'
confluentinc_kafkacat_1 exited with code 2 happens because the container exited: you overrode the startup command to just launch bash, which it does, and then it exits.
command:
- bash
- -c
This file shows a working example of what you need, in which the container will stay running
entrypoint:
- /bin/sh
- -c
- |
apk add jq;
while [ 1 -eq 1 ];do sleep 60;done
To Aydin's point that you don't need kafkacat in Docker: you can run it locally, but I find it easier to include it in the Docker Compose file so that you're not dependent on a local install.
If you want to run kafkacat outside of Docker Compose but still in Docker with docker run you can, but bear in mind the networking implications. If you try to use localhost then that's relative to the container itself, i.e. where kafkacat is running. Instead you need to make the Kafka broker accessible to the kafkacat container, e.g. by adding to the host network:
docker run --network host --interactive --rm edenhill/kafkacat:1.6.0 -b localhost:9092 -L
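On the second edit above: the metadata shows broker 1 at localhost:9092 because port 9092 is the PLAINTEXT_HOST listener, which this broker advertises as localhost:9092 for clients on the host machine. From inside the Compose network you should bootstrap against the internal listener instead, which this broker advertises as broker:29092; a sketch:
docker exec -it kafkacat kafkacat -b broker:29092 -t demotopic -C -f '\nKey (%K bytes): %k Value (%S bytes): %s Timestamp: %T Partition: %p Offset: %o Headers: %h\n'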
I'm trying to use Spring Boot with Kafka and ZooKeeper in Docker:
docker-compose.yml:
version: '2'
services:
zookeeper:
image: wurstmeister/zookeeper
restart: always
ports:
- "2181:2181"
kafka:
image: wurstmeister/kafka
restart: always
ports:
- "9092:9092"
environment:
KAFKA_ADVERTISED_HOST_NAME: localhost
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
docker ps output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
980e6b09f4e3 wurstmeister/kafka "start-kafka.sh" 29 minutes ago Up 29 minutes 0.0.0.0:9092->9092/tcp samplespringkafkaproducerconsumermaster_kafka_1
64519d4808aa wurstmeister/zookeeper "/bin/sh -c '/usr/sb…" 2 hours ago Up 29 minutes 22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp samplespringkafkaproducerconsumermaster_zookeeper_1
docker-compose up output log:
kafka_1 | [2018-01-12 13:14:49,545] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,546] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,546] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,547] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,547] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,548] INFO Client environment:os.version=4.9.60-linuxkit-aufs (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,548] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,549] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,549] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,552] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@1534f01b (org.apache.zookeeper.ZooKeeper)
kafka_1 | [2018-01-12 13:14:49,574] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient)
kafka_1 | [2018-01-12 13:14:49,578] INFO Opening socket connection to server samplespringkafkaproducerconsumermaster_zookeeper_1.samplespringkafkaproducerconsumermaster_default/192.168.32.2:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
zookeeper_1 | 2018-01-12 13:14:49,591 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /192.168.32.3:51466
kafka_1 | [2018-01-12 13:14:49,593] INFO Socket connection established to samplespringkafkaproducerconsumermaster_zookeeper_1.samplespringkafkaproducerconsumermaster_default/192.168.32.2:2181, initiating session (org.apache.zookeeper.ClientCnxn)
zookeeper_1 | 2018-01-12 13:14:49,600 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@928] - Client attempting to establish new session at /192.168.32.3:51466
zookeeper_1 | 2018-01-12 13:14:49,603 [myid:] - INFO [SyncThread:0:FileTxnLog@203] - Creating new log file: log.fd
zookeeper_1 | 2018-01-12 13:14:49,613 [myid:] - INFO [SyncThread:0:ZooKeeperServer@673] - Established session 0x160ea8232b00000 with negotiated timeout 6000 for client /192.168.32.3:51466
kafka_1 | [2018-01-12 13:14:49,616] INFO Session establishment complete on server samplespringkafkaproducerconsumermaster_zookeeper_1.samplespringkafkaproducerconsumermaster_default/192.168.32.2:2181, sessionid = 0x160ea8232b00000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
kafka_1 | [2018-01-12 13:14:49,619] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
kafka_1 | [2018-01-12 13:14:49,992] INFO Cluster ID = Fgy9ybPPQQ-QdLINzHpmVA (kafka.server.KafkaServer)
kafka_1 | [2018-01-12 13:14:50,003] WARN No meta.properties file under dir /kafka/kafka-logs-980e6b09f4e3/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka_1 | [2018-01-12 13:14:50,065] INFO [ThrottledRequestReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka_1 | [2018-01-12 13:14:50,065] INFO [ThrottledRequestReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka_1 | [2018-01-12 13:14:50,067] INFO [ThrottledRequestReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka_1 | [2018-01-12 13:14:50,167] INFO Log directory '/kafka/kafka-logs-980e6b09f4e3' not found, creating it. (kafka.log.LogManager)
kafka_1 | [2018-01-12 13:14:50,183] INFO Loading logs. (kafka.log.LogManager)
kafka_1 | [2018-01-12 13:14:50,199] INFO Logs loading complete in 15 ms. (kafka.log.LogManager)
kafka_1 | [2018-01-12 13:14:50,283] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka_1 | [2018-01-12 13:14:50,291] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
kafka_1 | [2018-01-12 13:14:50,633] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
kafka_1 | [2018-01-12 13:14:50,639] INFO [SocketServer brokerId=1005] Started 1 acceptor threads (kafka.network.SocketServer)
kafka_1 | [2018-01-12 13:14:50,673] INFO [ExpirationReaper-1005-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2018-01-12 13:14:50,674] INFO [ExpirationReaper-1005-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2018-01-12 13:14:50,675] INFO [ExpirationReaper-1005-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2018-01-12 13:14:50,691] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka_1 | [2018-01-12 13:14:50,753] INFO [ExpirationReaper-1005-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2018-01-12 13:14:50,757] INFO [ExpirationReaper-1005-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2018-01-12 13:14:50,762] INFO [ExpirationReaper-1005-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka_1 | [2018-01-12 13:14:50,777] INFO Creating /controller (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
kafka_1 | [2018-01-12 13:14:50,791] INFO [GroupCoordinator 1005]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka_1 | [2018-01-12 13:14:50,791] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
kafka_1 | [2018-01-12 13:14:50,793] INFO [GroupCoordinator 1005]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka_1 | [2018-01-12 13:14:50,798] INFO [GroupMetadataManager brokerId=1005] Removed 0 expired offsets in 5 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-01-12 13:14:50,811] INFO [ProducerId Manager 1005]: Acquired new producerId block (brokerId:1005,blockStartProducerId:5000,blockEndProducerId:5999) by writing to Zk with path version 6 (kafka.coordinator.transaction.ProducerIdManager)
kafka_1 | [2018-01-12 13:14:50,848] INFO [TransactionCoordinator id=1005] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka_1 | [2018-01-12 13:14:50,850] INFO [Transaction Marker Channel Manager 1005]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka_1 | [2018-01-12 13:14:50,850] INFO [TransactionCoordinator id=1005] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka_1 | [2018-01-12 13:14:50,949] INFO Creating /brokers/ids/1005 (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
zookeeper_1 | 2018-01-12 13:14:50,952 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x160ea8232b00000 type:create cxid:0x70 zxid:0x102 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers
zookeeper_1 | 2018-01-12 13:14:50,952 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x160ea8232b00000 type:create cxid:0x71 zxid:0x103 txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids
kafka_1 | [2018-01-12 13:14:50,957] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
kafka_1 | [2018-01-12 13:14:50,959] INFO Registered broker 1005 at path /brokers/ids/1005 with addresses: EndPoint(localhost,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.utils.ZkUtils)
kafka_1 | [2018-01-12 13:14:50,961] WARN No meta.properties file under dir /kafka/kafka-logs-980e6b09f4e3/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka_1 | [2018-01-12 13:14:50,992] INFO Kafka version : 1.0.0 (org.apache.kafka.common.utils.AppInfoParser)
kafka_1 | [2018-01-12 13:14:50,993] INFO Kafka commitId : aaa7af6d4a11b29d (org.apache.kafka.common.utils.AppInfoParser)
kafka_1 | [2018-01-12 13:14:51,004] INFO [KafkaServer id=1005] started (kafka.server.KafkaServer)
zookeeper_1 | 2018-01-12 13:14:51,263 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x160ea8232b00000 type:delete cxid:0xe3 zxid:0x105 txntype:-1 reqpath:n/a Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
kafka_1 | [2018-01-12 13:24:50,793] INFO [GroupMetadataManager brokerId=1005] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka_1 | [2018-01-12 13:34:50,795] INFO [GroupMetadataManager brokerId=1005] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
Kafka maven dependency in Producer and Consumer:
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>1.5.9.RELEASE</version>
<relativePath/>
</parent>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
</dependency>
application.properties in Producer:
spring.kafka.producer.bootstrap-servers=0.0.0.0:9092
spring.kafka.consumer.topic=kafka_topic
server.port=8080
application.properties in Consumer:
spring.kafka.consumer.bootstrap-servers=0.0.0.0:9092
spring.kafka.consumer.group-id=WorkUnitApp
spring.kafka.consumer.topic=kafka_topic
server.port=8081
Consumer:
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class Consumer {
    private static final Logger LOGGER = LoggerFactory.getLogger(Consumer.class);

    @KafkaListener(topics = "${spring.kafka.consumer.topic}")
    public void receive(ConsumerRecord<?, ?> consumerRecord) {
        LOGGER.info("received payload='{}'", consumerRecord.toString());
    }
}
Producer:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class Producer {
    private static final Logger LOGGER = LoggerFactory.getLogger(Producer.class);

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void send(String topic, String payload) {
        LOGGER.info("sending payload='{}' to topic='{}'", payload, topic);
        kafkaTemplate.send(topic, payload);
    }
}
ConsumerConfig log:
2018-01-12 15:25:48.220 INFO 20919 --- [ main] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [0.0.0.0:9092]
check.crcs = true
client.id = consumer-1
connections.max.idle.ms = 540000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = WorkUnitApp
heartbeat.interval.ms = 3000
interceptor.classes = null
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.ms = 50
request.timeout.ms = 305000
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
ProducerConfig log:
2018-01-12 15:26:27.956 INFO 20924 --- [nio-8080-exec-1] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = 1
batch.size = 16384
block.on.buffer.full = false
bootstrap.servers = [0.0.0.0:9092]
buffer.memory = 33554432
client.id = producer-1
compression.type = none
connections.max.idle.ms = 540000
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.fetch.timeout.ms = 60000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
timeout.ms = 30000
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
When I try to send a message I get an exception:
producer.send("kafka_topic", "test")
exception log:
2018-01-12 15:26:27.975 INFO 20924 --- [nio-8080-exec-1] o.a.kafka.common.utils.AppInfoParser : Kafka version : 0.10.1.1
2018-01-12 15:26:27.975 INFO 20924 --- [nio-8080-exec-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : f10ef2720b03b247
2018-01-12 15:26:58.152 ERROR 20924 --- [ad | producer-1] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='null' and payload='test' to topic kafka_topic:
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for kafka_topic-0 due to 30033 ms has passed since batch creation plus linger time
How can I fix it?
The problem is not with sending the key as null; the connection to the broker is probably not being established.
Try using a local Kafka installation.
If you are using a Mac, Docker for Mac has some networking limitations:
https://docs.docker.com/docker-for-mac/networking/#known-limitations-use-cases-and-workarounds
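As a quick, tool-agnostic connectivity check (my suggestion, not part of the setup above), verify something is actually listening where the client connects before digging into Kafka configs:
nc -vz localhost 9092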
I ran into the same issue. The issue was with my docker-compose file. I'm not 100% sure, but I think KAFKA_ADVERTISED_HOST_NAME and KAFKA_ADVERTISED_LISTENERS both need to reference localhost. My working compose file:
version: '2'
networks:
sb_net:
driver: bridge
services:
zookeeper:
image: confluentinc/cp-zookeeper:latest
hostname: zookeeper
networks:
- sb_net
ports:
- "2181:2181"
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
kafka:
image: confluentinc/cp-kafka:latest
depends_on:
- zookeeper
networks:
- sb_net
ports:
- "9092:9092"
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_ADVERTISED_HOST_NAME: localhost
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
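With this compose file, the matching Spring properties on the host would point at the advertised listener rather than 0.0.0.0 (an assumed adjustment to the question's application.properties):
spring.kafka.producer.bootstrap-servers=localhost:9092
spring.kafka.consumer.bootstrap-servers=localhost:9092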
A careless mistake on my part: I needed to add a link:
links:
- zookeeper:zookeeper
full docker-compose.yml:
version: '2'
services:
zookeeper:
image: wurstmeister/zookeeper
container_name: zookeeper
restart: always
ports:
- 2181:2181
kafka:
image: wurstmeister/kafka
container_name: kafka
restart: always
ports:
- 9092:9092
depends_on:
- zookeeper
links:
- zookeeper:zookeeper
environment:
KAFKA_ADVERTISED_HOST_NAME: localhost
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
I had the same problem; Kafka only allows 127.0.0.1/localhost by default.
My solution:
Add this line to Kafka's server.properties and restart the Kafka service:
listeners=PLAINTEXT://192.168.31.72:9092
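The same idea generalizes: bind on all interfaces but advertise the address clients should actually use. A sketch of the two relevant server.properties lines (replace the IP with your host's):
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://192.168.31.72:9092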
I'm trying to test the latest version of Hyperledger Fabric, the v1 in incubation, and I have an issue with it.
I'm following the instructions here to install Fabric:
https://hyperledger-fabric.readthedocs.io/en/latest/asset_setup/
I'm using Docker to spawn network entities & create/join a channel:
sudo docker --version
Docker version 1.13.1, build 092cba3
sudo docker-compose --version
docker-compose version 1.11.2, build dfed245
When I run:
sudo docker-compose -f docker-compose-gettingstarted.yml up
my six containers are running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f1b6d6128d43 sfhackfest22017/fabric-peer:x86_64-0.7.0-snapshot-c7b3fe0 "sh -c './channel_..." 21 minutes ago Up About a minute cli
8f9df755c160 sfhackfest22017/fabric-peer:x86_64-0.7.0-snapshot-c7b3fe0 "peer node start -..." 21 minutes ago Up About a minute 0.0.0.0:8056->7051/tcp peer2
2de6ee624d28 sfhackfest22017/fabric-peer:x86_64-0.7.0-snapshot-c7b3fe0 "peer node start -..." 21 minutes ago Up About a minute 0.0.0.0:8055->7051/tcp peer1
31ac53b6e5db sfhackfest22017/fabric-peer:x86_64-0.7.0-snapshot-c7b3fe0 "peer node start -..." 21 minutes ago Up About a minute 0.0.0.0:8051->7051/tcp, 0.0.0.0:8053->7053/tcp peer0
d98fc2a8652f sfhackfest22017/fabric-ca:x86_64-0.7.0-snapshot-6294c57 "sh -c 'sleep 10; ..." 21 minutes ago Up About a minute 0.0.0.0:8054->7054/tcp ca
07dcfceb86cc sfhackfest22017/fabric-orderer:x86_64-0.7.0-snapshot-c7b3fe0 "orderer" 21 minutes ago Up About a minute 0.0.0.0:8050->7050/tcp orderer
but I get this error at the end:
2017-03-01 14:55:32.183 UTC [msp] newIdentity -> INFO 016 Creating identity instance for ID &{DEFAULT IDENTITY}
cli | 2017-03-01 14:55:32.270 UTC [peer] GetManagerForChain -> INFO 017 Created new msp manager for chain testchainid
cli | 2017-03-01 14:55:32.270 UTC [msp] Setup -> INFO 018 Setting up the MSP manager (1 msps)
cli | 2017-03-01 14:55:32.270 UTC [msp] Setup -> INFO 019 Setting up MSP
cli | 2017-03-01 14:55:32.270 UTC [msp] NewBccspMsp -> INFO 01a Creating BCCSP-based MSP instance
cli | 2017-03-01 14:55:32.270 UTC [msp] Setup -> INFO 01b Setting up MSP instance DEFAULT
cli | 2017-03-01 14:55:32.270 UTC [msp] newIdentity -> INFO 01c Creating identity instance for ID &{DEFAULT IDENTITY}
cli | 2017-03-01 14:55:32.270 UTC [msp] newIdentity -> INFO 01d Creating identity instance for ID &{DEFAULT IDENTITY}
cli | 2017-03-01 14:55:32.271 UTC [msp] newIdentity -> INFO 01e Creating identity instance for ID &{DEFAULT IDENTITY}
cli | 2017-03-01 14:55:32.271 UTC [msp] newIdentity -> INFO 01f Creating identity instance for ID &{DEFAULT IDENTITY}
cli | 2017-03-01 14:55:32.275 UTC [msp] newIdentity -> INFO 020 Creating identity instance for ID &{DEFAULT IDENTITY}
cli | 2017-03-01 14:55:32.275 UTC [msp] newIdentity -> INFO 021 Creating identity instance for ID &{DEFAULT IDENTITY}
cli | 2017-03-01 14:55:32.275 UTC [msp] newIdentity -> INFO 022 Creating identity instance for ID &{DEFAULT IDENTITY}
cli | 2017-03-01 14:55:32.275 UTC [msp] Setup -> INFO 023 MSP manager setup complete, setup 1 msps
cli | 2017-03-01 14:55:32.275 UTC [logging] InitFromViper -> DEBU 024 Setting default logging level to DEBUG for command 'channel'
cli | 2017-03-01 14:55:32.275 UTC [peer] GetLocalMSP -> INFO 025 Returning existing local MSP
cli | 2017-03-01 14:55:32.275 UTC [msp] GetDefaultSigningIdentity -> INFO 026 Obtaining default signing identity
cli | Error: Error getting broadcast client: Error connecting to orderer:7050 due to grpc: timed out when dialing
How can I fix it?
And when I perform:
sudo docker exec -it cli bash
[sudo] password for blockchain:
root@f1b6d6128d43:/opt/gopath/src/github.com/hyperledger/fabric/peer# cat results.txt
ERROR on CHANNEL CREATION
Here is my docker-compose.yml:
version: '2'
networks:
bridge:
services:
ccenv_latest:
container_name: ccenv_latest
build: ./ccenv
image: hyperledger/fabric-ccenv:latest
volumes:
- ./ccenv:/opt/gopath/src/github.com/hyperledger/fabric/orderer/ccenv
ccenv_snapshot:
container_name: ccenv_snapshot
build: ./ccenv
image: hyperledger/fabric-ccenv:x86_64-0.7.0-snapshot-c7b3fe0
volumes:
- ./ccenv:/opt/gopath/src/github.com/hyperledger/fabric/orderer/ccenv
ca:
image: sfhackfest22017/fabric-ca:x86_64-0.7.0-snapshot-6294c57
ports:
- 8054:7054
environment:
- CA_CERTIFICATE=peerOrg0_cert.pem
- CA_KEY_CERTIFICATE=peerOrg0_pk.pem
volumes:
- ./tmp/ca:/.fabric-ca
command: sh -c 'sleep 10; fabric-ca server start -ca /.fabric-ca/$$CA_CERTIFICATE -ca-key /.fabric-ca/$$CA_KEY_CERTIFICATE -config /etc/hyperledger/fabric-ca/server-config.json -address "0.0.0.0"'
container_name: ca
orderer:
container_name: orderer
image: sfhackfest22017/fabric-orderer:x86_64-0.7.0-snapshot-c7b3fe0
environment:
- ORDERER_GENERAL_LEDGERTYPE=ram
- ORDERER_GENERAL_BATCHTIMEOUT=10s
- ORDERER_GENERAL_BATCHSIZE_MAXMESSAGECOUNT=10
- ORDERER_GENERAL_MAXWINDOWSIZE=1000
- ORDERER_GENERAL_ORDERERTYPE=solo
- ORDERER_GENERAL_LOGLEVEL=debug
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_GENERAL_LISTENPORT=7050
- ORDERER_RAMLEDGER_HISTORY_SIZE=100
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderer
command: orderer
ports:
- 8050:7050
networks:
- bridge
peer0:
container_name: peer0
image: sfhackfest22017/fabric-peer:x86_64-0.7.0-snapshot-c7b3fe0
environment:
- CORE_PEER_ADDRESSAUTODETECT=true
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
- CORE_LOGGING_LEVEL=DEBUG
- CORE_PEER_NETWORKID=peer0
- CORE_NEXT=true
- CORE_PEER_ENDORSER_ENABLED=true
- CORE_PEER_ID=peer0
- CORE_PEER_PROFILE_ENABLED=true
- CORE_PEER_COMMITTER_LEDGER_ORDERER=orderer:7050
- CORE_PEER_GOSSIP_ORGLEADER=true
- CORE_PEER_GOSSIP_IGNORESECURITY=true
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
command: peer node start --peer-defaultchain=false
ports:
- 8051:7051
- 8053:7053
links:
- orderer:orderer
volumes:
- /var/run/:/host/var/run/
- ./tmp/peer0:/etc/hyperledger/fabric/msp/sampleconfig
networks:
- bridge
peer1:
container_name: peer1
image: sfhackfest22017/fabric-peer:x86_64-0.7.0-snapshot-c7b3fe0
environment:
- CORE_PEER_ADDRESSAUTODETECT=true
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
- CORE_LOGGING_LEVEL=DEBUG
- CORE_PEER_NETWORKID=peer0
- CORE_NEXT=true
- CORE_PEER_ENDORSER_ENABLED=true
- CORE_PEER_ID=peer1
- CORE_PEER_PROFILE_ENABLED=true
- CORE_PEER_COMMITTER_LEDGER_ORDERER=orderer:7050
- CORE_PEER_GOSSIP_ORGLEADER=true
- CORE_PEER_GOSSIP_IGNORESECURITY=true
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
ports:
- 8055:7051
command: peer node start --peer-defaultchain=false
links:
- orderer:orderer
- peer0:peer0
volumes:
- /var/run/:/host/var/run/
- ./tmp/peer1:/etc/hyperledger/fabric/msp/sampleconfig
networks:
- bridge
peer2:
container_name: peer2
image: sfhackfest22017/fabric-peer:x86_64-0.7.0-snapshot-c7b3fe0
environment:
- CORE_PEER_ADDRESSAUTODETECT=true
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
- CORE_LOGGING_LEVEL=DEBUG
- CORE_PEER_NETWORKID=peer0
- CORE_NEXT=true
- CORE_PEER_ENDORSER_ENABLED=true
- CORE_PEER_ID=peer2
- CORE_PEER_PROFILE_ENABLED=true
- CORE_PEER_COMMITTER_LEDGER_ORDERER=orderer:7050
- CORE_PEER_GOSSIP_ORGLEADER=true
- CORE_PEER_GOSSIP_IGNORESECURITY=true
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
ports:
- 8056:7051
command: peer node start --peer-defaultchain=false
links:
- orderer:orderer
- peer0:peer0
- peer1:peer1
volumes:
- /var/run/:/host/var/run/
- ./tmp/peer2:/etc/hyperledger/fabric/msp/sampleconfig
networks:
- bridge
cli:
container_name: cli
image: sfhackfest22017/fabric-peer:x86_64-0.7.0-snapshot-c7b3fe0
tty: true
environment:
- GOPATH=/opt/gopath
- CORE_PEER_ADDRESSAUTODETECT=true
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
- CORE_LOGGING_LEVEL=DEBUG
- CORE_NEXT=true
- CORE_PEER_ID=cli
- CORE_PEER_ENDORSER_ENABLED=true
- CORE_PEER_COMMITTER_LEDGER_ORDERER=orderer:7050
- CORE_PEER_ADDRESS=peer0:7051
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
command: sh -c './channel_test.sh; sleep 10000'
# command: /bin/sh
links:
- orderer:orderer
- peer0:peer0
- peer1:peer1
- peer2:peer2
volumes:
- /var/run/:/host/var/run/
#in the "- <HOST>:/opt/gopath/src/github.com/hyperledger/fabric/examples/" mapping below, the HOST part
#should be modified to the path on the host. This will work as is in the Vagrant environment
- ./src/github.com/example_cc/example_cc.go:/opt/gopath/src/github.com/hyperledger/fabric/examples/example_cc.go
- ./tmp/peer3:/etc/hyperledger/fabric/msp/sampleconfig
- ./channel_test.sh:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel_test.sh
networks:
- bridge
And this is my channel_test.sh :
#!/bin/sh
# find address of peer0 in your network
PEER0_IP_ADDRESS=`perl -e 'use Socket; $a = inet_ntoa(inet_aton("peer0")); print "$a\n";'`
# create an anchor file
cat<<EOF>anchorPeer.txt
$PEER0_IP_ADDRESS
7051
-----BEGIN CERTIFICATE-----
MIICjDCCAjKgAwIBAgIUBEVwsSx0TmqdbzNwleNBBzoIT0wwCgYIKoZIzj0EAwIw
fzELMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDVNh
biBGcmFuY2lzY28xHzAdBgNVBAoTFkludGVybmV0IFdpZGdldHMsIEluYy4xDDAK
BgNVBAsTA1dXVzEUMBIGA1UEAxMLZXhhbXBsZS5jb20wHhcNMTYxMTExMTcwNzAw
WhcNMTcxMTExMTcwNzAwWjBjMQswCQYDVQQGEwJVUzEXMBUGA1UECBMOTm9ydGgg
Q2Fyb2xpbmExEDAOBgNVBAcTB1JhbGVpZ2gxGzAZBgNVBAoTEkh5cGVybGVkZ2Vy
IEZhYnJpYzEMMAoGA1UECxMDQ09QMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE
HBuKsAO43hs4JGpFfiGMkB/xsILTsOvmN2WmwpsPHZNL6w8HWe3xCPQtdG/XJJvZ
+C756KEsUBM3yw5PTfku8qOBpzCBpDAOBgNVHQ8BAf8EBAMCBaAwHQYDVR0lBBYw
FAYIKwYBBQUHAwEGCCsGAQUFBwMCMAwGA1UdEwEB/wQCMAAwHQYDVR0OBBYEFOFC
dcUZ4es3ltiCgAVDoyLfVpPIMB8GA1UdIwQYMBaAFBdnQj2qnoI/xMUdn1vDmdG1
nEgQMCUGA1UdEQQeMByCCm15aG9zdC5jb22CDnd3dy5teWhvc3QuY29tMAoGCCqG
SM49BAMCA0gAMEUCIDf9Hbl4xn3z4EwNKmilM9lX2Fq4jWpAaRVB97OmVEeyAiEA
25aDPQHGGq2AvhKT0wvt08cX1GTGCIbfmuLpMwKQj38=
-----END CERTIFICATE-----
EOF
#create
echo "Creating channel on Orderer"
CORE_PEER_GOSSIP_IGNORESECURITY=true CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/fabric/msp/sampleconfig CORE_PEER_COMMITTER_LEDGER_ORDERER=orderer:7050 peer channel create -c myc1 -a anchorPeer.txt >>log.txt 2>&1
cat log.txt
grep -q "Exiting" log.txt
if [ $? -ne 0 ]; then
echo "ERROR on CHANNEL CREATION" >> results.txt
exit 1
fi
echo "SUCCESSFUL CHANNEL CREATION" >> results.txt
sleep 5
TOTAL_PEERS=3
i=0
while test $i -lt $TOTAL_PEERS
do
echo "###################################### Joining peer$i"
CORE_PEER_COMMITTER_LEDGER_ORDERER=orderer:7050 CORE_PEER_ADDRESS=peer$i:7051 peer channel join -b myc1.block >>log.txt 2>&1
cat log.txt
echo '-------------------------------------------------'
grep -q "Join Result: " log.txt
if [ $? -ne 0 ]; then
echo "ERROR on JOIN CHANNEL" >> results.txt
exit 1
fi
echo "SUCCESSFUL JOIN CHANNEL on PEER$i" >> results.txt
echo "SUCCESSFUL JOIN CHANNEL on PEER$i"
i=$((i+1))
sleep 10
done
echo "Peer0 , Peer1 and Peer2 are added to the channel myc1"
cat log.txt
exit 0
I faced a similar problem and found it was a DNS issue, which I then corrected in the hosts file. The cause was the hostname "orderer". FYR.
What OS are you using? I had the same issue running this getting-started guide with VirtualBox running Ubuntu 16.04 LTS on Windows 7. After that I tried installing Ubuntu side by side with Windows, and it worked on the first try.
It may be because the Docker containers are on different networks.
If the new organization is on a different network, its peers cannot communicate with the orderer, which is on the other network.
Check the networks: docker network ls
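For example (container and network names here are hypothetical), to see which network each container joined and to attach a container to the right one:
docker network ls
docker inspect --format '{{json .NetworkSettings.Networks}}' orderer
docker network connect myproject_bridge orderer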