On my Linux server, I am running three containers:
A) Kafka and Zookeeper, with this docker-compose file:
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper:3.4.6
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka:2.11-2.0.0
    ports:
      - "9092:9092"
    expose:
      - "9093"
    environment:
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9093,OUTSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9093,OUTSIDE://0.0.0.0:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
This will open up the kafka broker to the host machine.
B) JupyterHub
docker run -v /notebooks:/notebooks -p 8000:8000 jupyterhub
C) Confluent Schema Registry (I have not tried it yet, but in my final setup I will have a schema registry container as well)
docker run confluentinc/cp-schema-registry
Both are starting up without any issues. But how do I open up the Kafka container and Schema Registry ports to the JupyterHub container, so that my Python scripts can access the brokers?
I'm assuming you want to run your Jupyter notebook container on demand, whereas your ZooKeeper and Kafka containers will always be running separately? You can create a Docker network and join all the containers to it. The containers will then be able to resolve each other by name.
Create a network
Specify this network in compose file
When starting your other containers with docker run, use the --network option (see the sketch below).
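A minimal sketch of that approach (the network name kafka-net is just a placeholder):
docker network create kafka-net
Then, in the compose file, attach the services to that pre-created network instead of the default one:
networks:
  default:
    external:
      name: kafka-net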
Alternatively, if you run docker network ls you can find the name of the network that Compose creates for you by default; it will be named something like directoryname_default. You can then launch the other containers connected to that network:
docker run --net directoryname_default confluentinc/cp-schema-registry
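The JupyterHub container from the question can be attached the same way, so notebooks can reach the broker by the service name kafka:
docker run --net directoryname_default -v /notebooks:/notebooks -p 8000:8000 jupyterhub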
If you can include these containers in the same docker-compose.yml file then you won’t need to do anything special. In particular this probably makes sense for the Confluent schema registry, which you can consider a core part of the Kafka stack if you’re using Avro messages.
You can use the Docker Compose service name kafka as a host name here, but since you need to connect to the “inside” listener you’ll need to configure a non-default port 9093. (The Docker Compose expose: directive doesn’t do much and you can safely delete it.)
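For example, you might add something like this under services: in the same docker-compose.yml (a sketch; the SCHEMA_REGISTRY_* variables follow the Confluent image's settings, and the jupyterhub image name is taken from your docker run command):
  schema-registry:
    image: confluentinc/cp-schema-registry
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: kafka:9093
    ports:
      - "8081:8081"
  jupyterhub:
    image: jupyterhub
    volumes:
      - /notebooks:/notebooks
    ports:
      - "8000:8000"
With this, code running in the notebooks can reach the broker at kafka:9093 and the registry at http://schema-registry:8081.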
Related
I started MySQL from a Docker container. I was surprised that I could connect to it via localhost using the command below:
mysql -uroot -proot -P3306 -h localhost
I thought that Docker containers started on the bridge network won't be available outside that network. How is it that the mysql CLI is able to connect to this instance?
Below is my docker-compose file that runs the mysqldb-docker instance:
version: '3.8'
services:
  mysqldb-docker:
    image: 'mysql:8.0.27'
    restart: 'unless-stopped'
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_PASSWORD=root
      - MYSQL_DATABASE=reco-tracker-dev
    volumes:
      - mysqldb:/var/lib/mysql
  reco-tracker-docker:
    image: 'reco-tracker-docker:v1'
    ports:
      - "8083:8083"
    environment:
      - SPRING_DATASOURCE_USERNAME=root
      - SPRING_DATASOURCE_PASSWORD=root
      - SPRING_DATASOURCE_URL="jdbc:mysql://mysqldb-docker:3306/reco-tracker-dev"
    depends_on: [mysqldb-docker]
    env_file:
      - ./.env
volumes:
  mysqldb:
You have published the port(s). That means you can reach them on the host system on the published port.
By default, when you create or run a container using docker create or docker run, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container’s network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host to the outside world.
The critical section in your config is shown below. You have added a ports key to your service; this is Compose's way to publish ports. The left part is the port that is published on the host system, and the right part is the port the container actually listens on.
ports:
  - "3306:3306"
Also keep in mind that when you start Compose, a default network is created that joins all containers in the compose stack. That's why these containers can find each other, using the service name and/or container name as hostname.
You don't need to publish the port(s) as you did in order for the containers to communicate with each other; I guess that's why you did it. You can and probably should remove any port mapping from internal services, if possible. This adds extra security to your setup, because then it behaves the way you describe: only containers in the same network can find each other.
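For example, with your file the database's mapping could simply be dropped while the Spring app stays published (a sketch):
  mysqldb-docker:
    image: 'mysql:8.0.27'
    # ports: removed -- the database is now reachable only from containers
    # on the same compose network (e.g. via mysqldb-docker:3306)
    environment:
      - MYSQL_ROOT_PASSWORD=root
  reco-tracker-docker:
    image: 'reco-tracker-docker:v1'
    ports:
      - "8083:8083" # still published so the app stays reachable from the host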
I have one application running on http://home.local:8180 in container A, and another container B running on http://data.local:9010. Container B uses container A to hit the API. If I specify container A's hostname as http://host.docker.internal:8180 in container B, then it works. What would I have to do if I want to use the hostname as is (home.local:8180)?
Following is the docker-compose file:
home_app:
  hostname: "home.local"
  image: "home-app"
  ports:
    - "8180:8080"
  environment:
data_app:
  hostname: "data.local"
  image: "data-app"
  links:
    - "home_app"
  ports:
    - "9010:9010"
Just use "home.local:8080". Port 8180 only exists on the host machine and forwards to 8080 in the container; based on your docker-compose file, 8080 is the port your application listens on in the home_app container. So within the docker-compose network, other containers should be able to access it via the hostname (home.local) and the actual port (8080).
You need to configure your application to use the Compose service name home_app as a host name, and the port number that the process inside the container is using. Neither hostname: nor ports: has any effect on connections between containers. You don't need to (and can't) specify a custom DNS suffix. See Networking in Compose in the Docker documentation for additional details.
So I might specify:
version: '3.8'
services:
  home_app:
    image: "home-app"
    ports:
      - "8180:8080" # optional, only for access from outside Docker
  data_app:
    image: "data-app"
    ports:
      - "9010:9010"
    environment:
      HOME_APP_URL: 'http://home_app:8080'
You don't need hostname:, which only affects what a container thinks its own hostname is and has no effect on anything outside the container; and you don't need links:, which is an obsolete option from first-generation Docker networking.
I have 2 docker images, one for my backend and one for a mock database. I want to spin up these two images separately and link the backend to the database. To do this I have a connection string in my backend like so: Data Source=192.168.99.100;Catalog=DB name;Integrated Security=True;MultipleActiveResultSets=True, where 192.168.99.100 is the IP of my default Docker machine where the database container is running. So on my Windows machine this works perfectly and the backend container can communicate with the database, which is running in another container. However, when some of my colleagues who use Mac and Linux use the same images, they can't get the link to work because they obviously don't have the same IP for their Docker machine.
Is there any way to reference the database in the connection string so that it is the same no matter where it is running? For example use the name of the database container, instead of the IP or something similar?
You can also do this using plain docker. Basically you just need to create a bridge network, and then attach both containers to it.
Eg:
docker network create --driver=bridge mynetwork
docker run --network=mynetwork --name mydb mydb:latest
docker run --network=mynetwork --name myapp myapp:latest
Then inside the myapp container you can reference the database container using the hostname mydb (same as with docker-compose). You can still publish ports from the myapp container to your host using -p 3000:3000, etc.
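Applied to the connection string from the question, you would just replace the IP with the container name, e.g.:
Data Source=mydb;Catalog=DB name;Integrated Security=True;MultipleActiveResultSets=True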
Further reading: https://docs.docker.com/network/bridge/
You can use docker-compose services to achieve what you are looking for. Here is a simplified example docker-compose.yml file:
version: "3.5"
services:
db:
container_name: mock_db
restart: "no"
build: ./mock_db
expose:
- 5432 (or whatever your port is)
env_file: .env
command: your-command
server:
container_name: my_server
build: ./server
env_file: .env
ports:
- "8443:8443"
command: your-command
You can then reference the service name (in this case db) as the host part of your connection string.
You can read more about docker-compose configuration options here
I'm trying to use docker-compose to run continuous integration tests on a Jenkins server.
Here is my docker-compose.yml:
version: '3'
services:
  elasticsearch:
    container_name: elasticsearch_${INSTANCE}
    image: docker.elastic.co/elasticsearch/elasticsearch:6.7.2
    ports:
      - 9200:9200
      - 9300:9300
    command: elasticsearch -E transport.host=0.0.0.0
    environment:
      ES_JAVA_OPTS: "-Xms2g -Xmx2g"
      discovery-type: single-node
  mainapp:
    container_name: mainapp_${INSTANCE}
    image: testbot:${INSTANCE}
    environment:
      ES_ADDRESS: http://elasticsearch_${INSTANCE}:9200
      SUBSET: ${SUBSET}
      DIRECTORY: ${DIRECTORY}
      INSTANCE: ${INSTANCE}
      TEST_CMD: ${TEST_CMD}
    command: /bin/bash /mainapp/build/tests/wrapper.sh
This works great, but when I try to run multiple tests at the same time, the previously running test exits with code 137 immediately. I think this is because the services are binding to the host network, and I can't do that with multiple containers.
For my purposes, the two services that are started only need to communicate with each other, not with the host at all. I'm a bit confused with exactly how to network this.
You can do this by specifying a different project name using the COMPOSE_PROJECT_NAME environment variable or the --project-name flag for docker-compose. All services, networks, and volumes are created and named per-project.
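For example, two concurrent Jenkins jobs could each start an isolated copy of the same stack (the project names here are arbitrary):
COMPOSE_PROJECT_NAME=test_${INSTANCE} docker-compose up -d
# or, equivalently
docker-compose --project-name test_${INSTANCE} up -d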
You can drop the ports property.
If you wish, you can use the expose property instead (then you only need to list the container port, e.g. expose: - 9200), but expose is purely documentary and is not functionally required.
The ports property defines ports that will be published on the host.
If you don't want or need ports published on the host, you don't need it.
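In other words, the elasticsearch service from the question could look like this (a sketch), and mainapp would still reach it over the compose network on port 9200:
  elasticsearch:
    container_name: elasticsearch_${INSTANCE}
    image: docker.elastic.co/elasticsearch/elasticsearch:6.7.2
    # no ports: section -> nothing is bound on the host, so parallel runs don't collide
    command: elasticsearch -E transport.host=0.0.0.0
    environment:
      ES_JAVA_OPTS: "-Xms2g -Xmx2g"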
I have a docker-compose file with several service-container definitions. One of the services communicates with Apache Kafka within the same docker-compose run.
So I have the kafka docker definition like this:
kafka:
  image: spotify/kafka
  ports:
    - "2181:2181"
    - "9092:9092"
  environment:
    ADVERTISED_HOST: 127.0.0.1
    ADVERTISED_PORT: 9092
I have my service definition in the same docker-compose file. In the startup script of the service, I have to somehow figure out the IP address of the Kafka instance.
I know, I can use something like docker inspect to find out which IP address is used by a container.
But how can I do it dynamically in a docker-compose environment?
EDIT
So, the right configuration should be (thank you, #nwinkler):
kafka:
  image: spotify/kafka
  ports:
    - "2181:2181"
    - "9092:9092"
  environment:
    ADVERTISED_HOST: kafka
    ADVERTISED_PORT: 9092
myservice:
  image: foo
  links:
    - kafka:kafka
Don't forget to set ADVERTISED_HOST to kafka (or however you named your Kafka container within docker-compose).
You can use the Docker Compose Links feature for this. If you provide a link to the kafka container from your other container, Docker Compose will ensure that your other container can access the Kafka container through its hostname - you will not have to know its IP address.
Example:
kafka:
  image: spotify/kafka
  ports:
    - "2181:2181"
    - "9092:9092"
  environment:
    ADVERTISED_HOST: 127.0.0.1
    ADVERTISED_PORT: 9092
myservice:
  image: foo
  links:
    - kafka:kafka
This will allow your myservice container to access the Kafka container through the kafka hostname. So from your myservice container, you can, for example, point your Kafka client at kafka:9092 to reach the broker.
Docker Compose does this through DNS: it creates a hostname/IP mapping in your container, allowing you to access the other container without knowing its IP address.
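You can see that mapping from inside the running container, e.g. (assuming the image includes getent):
docker-compose exec myservice getent hosts kafka   # prints the internal IP that the name resolves to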
The IP of your container will be the IP you are looking for.
Append the port number (9092 in your case) to the container's IP to reach whatever Kafka is serving.