How to access Kafka installed in Docker with Golang on the host - docker

I need to use Golang to access Kafka, so I installed Kafka and ZooKeeper in Docker.
1. Here is the Kafka install script:
# pull images
docker pull wurstmeister/zookeeper
docker pull wurstmeister/kafka
# run kafka & zookeeper
docker run -d --name zookeeper -p 2181 -t wurstmeister/zookeeper
docker run --name kafka -e HOST_IP=localhost -e KAFKA_ADVERTISED_PORT=9092 -e KAFKA_BROKER_ID=1 -e ZK=zk -p 9092:9092 --link zookeeper:zk -t wurstmeister/kafka
# enter container
docker exec -it ${CONTAINER_ID} /bin/bash
cd opt/kafka_2.11-0.10.1.1/
# create a topic
bin/kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic mykafka
# start a producer in terminal-1
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic mykafka
# open another terminal (terminal-2) and start a consumer
bin/kafka-console-consumer.sh --zookeeper zookeeper:2181 --topic mykafka --from-beginning
When I type a message in the producer, the consumer gets it immediately,
so I assumed that Kafka was working fine.
2. Now I need to create a consumer in Golang to access Kafka.
Here is my Golang demo code:
import "github.com/bsm/sarama-cluster"
func Consumer(){
// init (custom) config, enable errors and notifications
config := cluster.NewConfig()
config.Consumer.Return.Errors = true
config.Group.Return.Notifications = true
// init consumer
brokers := []string{"192.168.9.100:9092"}
topics := []string{"mykafka"}
consumer, err := cluster.NewConsumer(brokers, "my-group-id", topics, config)
if err != nil {
panic(err)
}
defer consumer.Close()
// trap SIGINT to trigger a shutdown.
signals := make(chan os.Signal, 1)
signal.Notify(signals, os.Interrupt)
// consume messages, watch errors and notifications
for {
select {
case msg, more := <-consumer.Messages():
if more {
fmt.Fprintf(os.Stdout, "%s/%d/%d\t%s\t%s\n", msg.Topic, msg.Partition, msg.Offset, msg.Key, msg.Value)
consumer.MarkOffset(msg, "") // mark message as processed
}
case err, more := <-consumer.Errors():
if more {
log.Printf("Error: %s\n", err.Error())
}
case ntf, more := <-consumer.Notifications():
if more {
log.Printf("Rebalanced: %+v\n", ntf)
}
case <-signals:
return
}
}
}
Actually, this demo code was copied from the demo in a GitHub repo: sarama-cluster.
When running the code, I got an error:
kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
I did use a port mapping when starting Kafka, but I just can't access it from Golang.
Is there a way to use curl to access Kafka?
I've tried:
curl http://192.168.99.10:9092
and Kafka reported an error:
[2017-08-02 06:39:15,232] WARN Unexpected error from /192.168.99.1; closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 1195725856 larger than 104857600)
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:95)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:75)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:203)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:167)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:379)
at org.apache.kafka.common.network.Selector.poll(Selector.java:326)
at kafka.network.Processor.poll(SocketServer.scala:499)
at kafka.network.Processor.run(SocketServer.scala:435)
at java.lang.Thread.run(Thread.java:748)
BTW:
I use Windows 7.
Docker machine's IP: 192.168.99.100.
It's driving me crazy.
Is there any advice or a solution? Appreciated!!!

If you want to create a consumer to listen to a topic from Kafka, let's try it this way.
I used confluent-kafka-go from the tutorial: https://github.com/confluentinc/confluent-kafka-go
This is the code in the main.go file:
package main

import (
	"fmt"

	"gopkg.in/confluentinc/confluent-kafka-go.v1/kafka"
)

func main() {
	c, err := kafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers": "localhost",
		"group.id":          "myGroup",
		"auto.offset.reset": "earliest",
	})
	if err != nil {
		panic(err)
	}
	defer c.Close() // closing after the infinite loop below would never be reached

	c.SubscribeTopics([]string{"myTopic", "^aRegex.*[Tt]opic"}, nil)

	for {
		msg, err := c.ReadMessage(-1)
		if err == nil {
			fmt.Printf("Message on %s: %s\n", msg.TopicPartition, string(msg.Value))
		} else {
			// The client will automatically try to recover from all errors.
			fmt.Printf("Consumer error: %v (%v)\n", err, msg)
		}
	}
}
If you use Docker to build, follow this comment to add the suitable packages:
For Debian and Ubuntu based distros, install librdkafka-dev from the standard repositories or from Confluent's Deb repository.
For Redhat based distros, install librdkafka-devel from Confluent's YUM repository.
For MacOS X, install librdkafka from Homebrew. You may also need pkg-config if you don't already have it: brew install librdkafka pkg-config.
For Alpine: apk add librdkafka-dev pkgconf
confluent-kafka-go is not supported on Windows.
With Alpine, remember to install librdkafka from the community repository, because otherwise you can only get librdkafka up to version 1.1.0.
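For example, here is a minimal Dockerfile sketch for a Debian-based build (the image tag and file names are illustrative assumptions, not from the project docs):
# Debian-based Go image; the librdkafka headers are needed because
# confluent-kafka-go links against librdkafka via cgo
FROM golang:1.13-buster
RUN apt-get update && apt-get install -y librdkafka-dev pkg-config
WORKDIR /app
COPY . .
RUN go build -o consumer main.go
CMD ["./consumer"]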
Good luck!

I'm not sure it's possible to use curl with Kafka: curl speaks HTTP, but a Kafka broker speaks its own binary protocol, which is why the broker logged "Invalid receive (size = 1195725856)" above - 1195725856 is just the ASCII bytes of "GET " interpreted as a length field. But you can use the kafka-console-consumer:
kafka-console-consumer.bat --bootstrap-server 192.168.9.100:9092 --topic mykafka --from-beginning
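If you want a quick reachability check from Go instead of curl, a minimal sketch with the plain sarama client should do (the broker address is the docker-machine IP from the question and may need adjusting):
package main

import (
	"fmt"

	"github.com/Shopify/sarama"
)

func main() {
	// Connecting and fetching metadata is roughly what the console tools
	// do on startup; if this fails, the broker is not reachable from this
	// machine under the address it advertises.
	client, err := sarama.NewClient([]string{"192.168.99.100:9092"}, sarama.NewConfig())
	if err != nil {
		panic(err)
	}
	defer client.Close()

	topics, err := client.Topics()
	if err != nil {
		panic(err)
	}
	fmt.Println("topics:", topics)
}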

I've found the reason.
The Kafka settings were not correct.
This is server.properties:
############################# Socket Server Settings #############################
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092
# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092
If listeners is not set, Kafka will only accept connections addressed to the hostname returned by java.net.InetAddress.getCanonicalHostName(), which inside the container effectively means localhost.
So I should set:
listeners=PLAINTEXT://0.0.0.0:9092
and this will work.
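Note that, depending on the Kafka version, you may also need advertised.listeners to point at an address that clients outside the container can actually reach; a sketch of both settings together, using the docker-machine IP from the question:
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://192.168.99.100:9092
With the wurstmeister/kafka image these can reportedly be set via KAFKA_LISTENERS and KAFKA_ADVERTISED_LISTENERS environment variables on docker run, instead of editing server.properties inside the container.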

Related

How to use the Docker golang client package to connect over TCP?

I am prototyping a Go application that will ultimately talk to a remote Docker host. To this end I am using Docker's Go client package (docs at https://godoc.org/github.com/docker/docker/client).
The environment is Ubuntu 19.10 in VirtualBox 6.1.4, using Docker 19.03.6, and Go 1.14. All Go packages have been installed with go get in the last 72 hours.
For local testing purposes I am trying to connect to a local Docker host at tcp://0.0.0.0:2375. That is, I am running
sudo dockerd -H tcp://0.0.0.0:2375
With this, commands such as
docker -H tcp://0.0.0.0:2375 ps
and
curl -k -v -i http://0.0.0.0:2375/v1.40/containers/json
both work, and I am able to observe the traffic over port 2375 with Wireshark.
However, attempting to do the same thing through the Go client package fails with
Cannot connect to the Docker daemon at tcp://0.0.0.0:2375. Is the docker daemon running?
and nothing shows up in Wireshark.
Here is the example Go code:
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.WithHost("tcp://0.0.0.0:2375"), client.WithAPIVersionNegotiation())
	if err != nil {
		fmt.Println("error: could not create docker client handle")
		fmt.Println(err)
		return // without this, a nil client would be used below
	}

	options := types.ContainerListOptions{}
	data, err := cli.ContainerList(context.Background(), options)
	if err != nil {
		fmt.Println("error: could not request containers list")
		fmt.Println(err)
	} else {
		fmt.Println(data)
	}
}
Attempts to set the environment variables DOCKER_HOST=tcp://0.0.0.0:2375, DOCKER_CERT_PATH=, and DOCKER_TLS_VERIFY=, and then to configure the client handle through client.FromEnv, also failed in exactly the same way.
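For reference, the client.FromEnv variant of the constructor that was attempted looks like this (a sketch, assuming those variables are exported):
cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())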
What am I doing wrong here?

How to know if load balancing works in Docker Swarm?

I created a service called accountservice and then replicated it 3 times. In my service I get the IP address of the producing service instance and include it in the JSON response. The problem is that every time I run curl $manager-ip:6767/accounts/10000 the returned IP is the same as before (I tried 100 times).
manager-ip environment variable:
set -x manager-ip (docker-machine ip swarm-manager-1)
Here's my Dockerfile:
FROM iron/base
EXPOSE 6767
ADD accountservice-linux-amd64 /
ADD healthchecker-linux-amd64 /
HEALTHCHECK --interval=3s --timeout=3s CMD ["./healthchecker-linux-amd64", "-port=6767"] || exit 1
ENTRYPOINT ["./accountservice-linux-amd64"]
And here's my automation script to build and run service:
#!/usr/bin/env fish
set -x GOOS linux
set -x CGO_ENABLED 0
set -x GOBIN ""
eval (docker-machine env swarm-manager-1)
go get
go build -o accountservice-linux-amd64 .
pushd ./healthchecker
go get
go build -o ../healthchecker-linux-amd64 .
popd
docker build -t azbshiri/accountservice .
docker service rm accountservice
docker service create \
--name accountservice \
--network my_network \
--replicas=1 \
-p 6767:6767 \
-p 6767:6767/udp \
azbshiri/accountservice
And here's the function I call to get the IP:
package common
import "net"
func GetIP() string {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return "error"
	}
	for _, addr := range addrs {
		ipnet, ok := addr.(*net.IPNet)
		if ok && !ipnet.IP.IsLoopback() {
			if ipnet.IP.To4() != nil {
				return ipnet.IP.String()
			}
		}
	}
	panic("Unable to determine local IP address (non loopback). Exiting.")
}
And I scale the service using the command below:
docker service scale accountservice=3
A few things:
Your results are normal. By default, a Swarm service has a VIP (virtual IP) in front of the service tasks to act as a load balancer. Trying to reach that service from inside the virtual network will only show that IP.
If you want to use a round-robin approach and skip the VIP, you could create a service with --endpoint-mode=dnsrr that would then return a different service task for each DNS request (but your client might be caching DNS names, causing that to show the same IP, which is why VIP is usually better).
If you want to get a list of IPs for the task replicas, do a dig tasks.<servicename> from inside the service's network.
If you want to test something easy, have your service generate a random string, or read its hostname on startup, and return that so you can tell the replicas apart when accessing them (see the sketch below). An easy example is to run one service using the image elasticsearch:2, which returns JSON on port 9200 with a different random name per container.
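A minimal sketch of that hostname approach in Go (the port matches the question's EXPOSE 6767; the handler path is an illustrative assumption):
package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Each replica returns its own container hostname, so repeated
	// curl requests against the VIP reveal which task served them.
	hostname, err := os.Hostname()
	if err != nil {
		hostname = "unknown"
	}
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "served by %s\n", hostname)
	})
	http.ListenAndServe(":6767", nil)
}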

kafka.errors.KafkaTimeoutError: KafkaTimeoutError: Failed to update metadata after 60.0 secs

I start a docker container to run a Kafka server with
docker run -p 2181:2181 -p 9092:9092 --env ADVERTISED_HOST=192.168.99.100 --env ADVERTISED_PORT=9092 spotify/kafka
I find the IP address of the Docker container. This is 172.17.0.2 and I can ping this address.
Now I want a producer that sends messages:
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers='172.17.0.2:9092')
for i in range(100):
    producer.send('foobar', b'hola')
producer.close()
However this gives:
kafka.errors.KafkaTimeoutError: KafkaTimeoutError: Failed to update metadata after 60.0 secs.
How to solve this?
Had the same error, but it was because my topic name wasn't right/set, same as python_noob. Also, since the container was started with ADVERTISED_HOST=192.168.99.100, it may be necessary to point bootstrap_servers at 192.168.99.100:9092 (the advertised address) rather than at the internal container IP.

Can't run Go (lang) app from docker image on docker-machine (Virtual Box)

I have a very simple application. Here is the code:
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

type Message struct {
	Text string `json:"text"`
}

var cookieQuotes = []string{
	// Skipped all the stuff
}

const COOKIE_NAME = "your_cookie"

func main() {
	http.HandleFunc("/set_cookie", setCookie)
	http.HandleFunc("/get_cookie", getCookie)
	http.Handle("/favicon.ico", http.NotFoundHandler())
	http.ListenAndServe(":8080", nil)
}

func setCookie(w http.ResponseWriter, r *http.Request) {
	quote := getRandomCookieQuote()
	encQuote := base64.StdEncoding.EncodeToString([]byte(quote))
	http.SetCookie(w, &http.Cookie{
		Name:  COOKIE_NAME,
		Value: encQuote,
	})
}

func getCookie(w http.ResponseWriter, r *http.Request) {
	cookie, err := r.Cookie(COOKIE_NAME)
	if err != nil {
		fmt.Fprintln(w, "Cannot get the cookie")
		return // without this, a missing cookie would cause a nil dereference below
	}
	message, _ := base64.StdEncoding.DecodeString(cookie.Value)
	msg := Message{Text: string(message)}
	fmt.Println(msg.Text)
	respBody, err := json.Marshal(msg)
	if err != nil {
		fmt.Println("Cannot marshal JSON")
	}
	fmt.Println(string(respBody))
	w.Header().Set("Content-Type", "application/json")
	fmt.Fprintln(w, string(respBody))
}

func getRandomCookieQuote() string {
	source := rand.NewSource(time.Now().UnixNano())
	random := rand.New(source)
	i := random.Intn(len(cookieQuotes))
	return cookieQuotes[i]
}
It was tested locally, and I've also run it in a Docker container on my own machine (Ubuntu), where it worked perfectly. But I want to run it on a virtual machine (I use Oracle VirtualBox).
So, I have installed docker-machine:
docker-machine version 0.12.2, build 9371605
After that, I switched to it, as recommended in the official documentation:
eval "$(docker-machine env default)"
So now I can operate from the perspective of that machine.
I've also tried to run nginx from the documentation example:
docker run -d -p 8000:80 nginx
curl $(docker-machine ip default):8000
And I get the result: I can reach the nginx welcome page by accessing my docker machine's IP address, which can be obtained with the command:
docker-machine ip default
But when I try to run my own Docker image, I cannot do the same. When I try to access it, I get:
curl $(docker-machine ip default):8080
curl: (7) Failed to connect to 192.168.99.100 port 8080: Connection refused
I've also tried omitting the port and adding a protocol (http, and even https for the sake of luck) - nothing works.
Maybe something is wrong with my Dockerfile?
# Go experiments with cookies
FROM golang:1.8-onbuild
MAINTAINER vasyania2#gmail.com
Could you help me please?
This command maps port 8080 on your docker host to port 80 inside your container:
docker run -d -p 8080:80 cookie-app
This instruction tells your Go application to listen on port 8080 inside the container:
http.ListenAndServe(":8080", nil)
You have a port mismatch between those two lines: your application is not listening on the port you are forwarding to.
To connect to port 8080 of your container, run the following instead:
docker run -d -p 8080:8080 cookie-app
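After running the container that way, the earlier check should succeed (a sketch reusing the docker-machine setup from the question; -v shows the Set-Cookie response header):
curl -v $(docker-machine ip default):8080/set_cookie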

Can't produce to or consume from a Kafka broker running inside a container

Setting Up
I am using the confluent/kafka images from Docker Hub to start the ZooKeeper and Kafka instances in two separate containers. The commands I have used to start the containers are as follows:
docker run --rm --name zookeeper -p 2181:2181 confluent/zookeeper
docker run --rm --name kafka -p 9092:9092 --link zookeeper:zookeeper confluent/kafka
And I have the two containers, zookeeper and kafka, running now.
Note that I have mapped ports 2181 and 9092 of the containers to my host machine's ports. I verified that this mapping is working by trying localhost:2181 and localhost:9092 in my browser, and I get some errors printed in my running containers' terminals.
Then I created a topic by issuing the following command on my host machine:
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
This was successful, and I verified it by listing the topics with the following command:
./bin/kafka-topics.sh --list --zookeeper localhost:2181
Now the ISSUE:
I am trying to produce some messages to the broker with the following command:
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
I am getting the following exception:
[2017-03-02 20:36:02,376] WARN Failed to send producer request with correlation id 2 to broker 0 with data for partitions [test,0] (kafka.producer.async.DefaultEventHandler)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SyncProducer.scala:103)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at kafka.producer.SyncProducer$$anonfun$send$1.apply$mcV$sp(SyncProducer.scala:102)
at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:102)
at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:102)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at kafka.producer.SyncProducer.send(SyncProducer.scala:101)
at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$send(DefaultEventHandler.scala:255)
at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:106)
at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:100)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:778)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:777)
at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:100)
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:72)
at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:105)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:88)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:68)
at scala.collection.immutable.Stream.foreach(Stream.scala:594)
at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:67)
at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:45)
I read some threads on the internet suggesting that I update my hosts file. If so, what entry do I have to put in my hosts file?
Also, some threads suggested setting the ADVERTISED_HOST entry to the correct IP in the configuration file. Which configuration file? Where do I make the update?
If it's the server.properties file used by the Kafka broker, then I did try going into the container created by the confluent/kafka image. It looks like this:
socket.send.buffer.bytes=102400
delete.topic.enable=true
socket.request.max.bytes=104857600
log.cleaner.enable=true
log.retention.check.interval.ms=300000
log.retention.hours=168
num.io.threads=8
broker.id=0
log4j.opts=-Dlog4j.configuration\=file\:/etc/kafka/log4j.properties
log.dirs=/var/lib/kafka
auto.create.topics.enable=true
num.network.threads=3
socket.receive.buffer.bytes=102400
log.segment.bytes=1073741824
num.recovery.threads.per.data.dir=1
num.partitions=1
zookeeper.connection.timeout.ms=6000
zookeeper.connect=zookeeper\:2181
Any suggestions on how I can overcome this and make producing to and consuming from the Kafka containers possible from my host machine?
Thanks a lot!!!
I was able to figure it out within seconds of posting this question.
I had to get the HOSTNAME of the container in which the broker was running by issuing:
echo $HOSTNAME
And I updated the /etc/hosts file on my host machine with the loopback entries:
127.0.0.1 KAFKA_CONTAINER_HOSTNAME
127.0.0.1 ZOOKEEPER_CONTAINER_HOSTNAME
Had to do the same with the zookeeper container in order for the consumer to also work without an issue. (This presumably works because the broker registers itself under its container hostname, and mapping that hostname to 127.0.0.1 lets clients on the host reach it through the published ports.)
Cheers!!!
