How do I dynamically add a container IP in another Dockerfile? I am running two containers: a) Redis and b) a Java application.
I need to pass the Redis URL to my Java arguments at run time.
Currently I manually check the Redis IP and copy it into the Dockerfile, then build a new image for the Java application using that IP.
docker run --name my-redis -d redis
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-redis
In the Dockerfile (Java application):
CMD ["-Dspring.redis.host=172.17.0.2", "-jar", "/apps/some-0.0.1-SNAPSHOT.jar"]
Can I use a script to update the Dockerfile, or can I use an environment variable?
You can assign a static IP address to your Docker container when you run it, following these steps:
1 - create a custom network (172.17.0.0/16 is used here only to match the 172.17.0.2 address from the question; if it overlaps your default bridge subnet, pick another one, e.g. 172.18.0.0/16, and adjust the IPs below accordingly):
docker network create --subnet=172.17.0.0/16 redis-net
2 - run the Redis container on the specified network and assign the IP address:
docker run --net redis-net --ip 172.17.0.2 --name my-redis -d redis
By then you have the static IP address 172.17.0.2 for the my-redis container, so you don't need to inspect it anymore.
3 - now you can run the Java application container, but it must use the same network:
docker run --net redis-net my-java-app
Of course you can refine this solution, for example by using environment variables or whatever you find convenient for your setup.
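For example, a minimal hedged sketch (the variable name REDIS_HOST and the base image are assumptions; the jar path is the one from the question): let the Dockerfile read the Redis host from an environment variable, so the address can be changed at run time without rebuilding the image.
# Dockerfile sketch for the Java application (base image is an assumption)
FROM eclipse-temurin:17-jre
COPY some-0.0.1-SNAPSHOT.jar /apps/some-0.0.1-SNAPSHOT.jar
# default host, can be overridden with -e at run time
ENV REDIS_HOST=172.17.0.2
# shell form so $REDIS_HOST is expanded when the container starts
CMD java -Dspring.redis.host=$REDIS_HOST -jar /apps/some-0.0.1-SNAPSHOT.jar
Then run it with, for example:
docker run --net redis-net -e REDIS_HOST=172.17.0.2 my-java-app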
More info can be found in the official docs (search for --ip):
docker run
docker network
Edit (add docker-compose):
I just found out that it is also possible to assign static IPs using docker-compose; this answer gives an example of how.
This is a similar example just in case:
version: '3'
services:
  redis:
    container_name: redis
    image: redis:latest
    restart: always
    networks:
      vpcbr:
        ipv4_address: 172.17.0.2
  java-app:
    container_name: java-app
    build: <path to Dockerfile>
    networks:
      vpcbr:
        ipv4_address: 172.17.0.3
    depends_on:
      - redis
networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 172.17.0.0/16
          gateway: 172.17.0.1
official docs: https://docs.docker.com/compose/networking/
hope this helps you find your way.
You should attach your containers to the same network. Then, at run time, you can refer to a container by its name: the container's name is its hostname on the network, so it resolves to the container's IP address.
Follow these steps:
First, create a network for the containers:
docker network create my-network
Start redis: docker run -d --network=my-network --name=redis redis
Edit the Java application's Dockerfile, replace -Dspring.redis.host=172.17.0.2 with -Dspring.redis.host=redis, and build the image again (see the sketch just after these steps).
Finally, start the Java application container: docker run -it --network=my-network your_image. Optionally you can give this container a name as well, but it is not required, since the Redis container never needs to reach the Java application container.
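A hedged sketch of that CMD change, keeping the structure from the question (the rest of the Dockerfile stays as it is):
CMD ["-Dspring.redis.host=redis", "-jar", "/apps/some-0.0.1-SNAPSHOT.jar"]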
Alternatively you can use a docker-compose file. By default docker-compose creates a network for running services. I am not aware of your full setup, so I will provide a sample docker-compose.yml that illustrates the main concept.
version: "3.7"
services:
  redis:
    image: redis
  java_app_image:
    image: your_image_name
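If you would rather not bake the hostname into the image, a hedged variant (assuming the app is Spring Boot, as the -Dspring.redis.host flag in the question suggests) is to drop that flag from the CMD and set the host through the environment in the compose file instead:
  java_app_image:
    image: your_image_name
    environment:
      # Spring Boot relaxed binding maps this to spring.redis.host
      SPRING_REDIS_HOST: redis
    depends_on:
      - redis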
Either way, the Java application reaches the Redis container dynamically via its hostname instead of a static IP.
Related
I have a Java application running in a Docker container and rabbitmq in another container.
How can I connect the containers to use rabbitmq in my Java application?
You have to set up a network and attach the running containers to it.
Then set your app's connection URL to the RabbitMQ container's name, which is its hostname on that Docker network (a sketch of the plain-docker commands follows the links below).
The easiest way is to create a docker-compose file, because it creates the network and attaches the containers automatically.
Create a network
Connect the container
Or
Docker compose file
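A hedged sketch of the plain-docker route from the links above (the network and container names are placeholders):
# create a user-defined network
docker network create my-network
# attach the already-running containers to it
docker network connect my-network rabbitmq_container
docker network connect my-network myapp_container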
Example of docker-compose.yml
version: '3.7'
services:
  yourapp:
    image: image_from_dockerhub_or_local  # or use "build: ./myapp_folder_below_this_where_is_the_Dockerfile" to build the image from scratch
    hostname: myapp
    ports:
      - 8080:8080
  rabbitmq:
    image: rabbitmq:3.8.3-management-alpine
    hostname: rabbitmq
    environment:
      RABBITMQ_DEFAULT_USER: user
      RABBITMQ_DEFAULT_PASS: pass
    ports:
      - 5672:5672
      - 15672:15672
You can run it with the docker-compose up command.
Then, in your connection URL, use host rabbitmq and port 5672.
Note that you don't have to create a port forward if you don't want to reach rabbitmq from your host machine.
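For instance, a hedged sketch of handing that host to the app through the compose file (AMQP_URL is a placeholder name; use whatever setting your application actually reads):
  yourapp:
    environment:
      # point the app at the service/host name, never at a container IP
      AMQP_URL: amqp://user:pass@rabbitmq:5672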
I have two Docker images, one for my backend and one for a mock database. I want to spin up these two images separately and link the backend to the database. To do this I have a connection string in my backend like Data Source=192.168.99.100;Catalog=DB name;Integrated Security=True;MultipleActiveResultSets=True, where 192.168.99.100 is the IP of my default Docker machine, on which the database container is running. On my Windows machine this works perfectly and the backend container can communicate with the database running in the other container. However, when colleagues on Mac and Linux use the same images, they can't get the link to work because their Docker machine obviously has a different IP.
Is there any way to reference the database in the connection string so that it is the same no matter where it is running? For example, using the name of the database container instead of the IP, or something similar?
You can also do this using plain docker. Basically you just need to create a bridge network, and then attach both containers to it.
Eg:
docker network create --driver=bridge mynetwork
docker run --network=mynetwork --name mydb mydb:latest
docker run --network=mynetwork --name myapp myapp:latest
Then, inside the myapp container, you can reference the database container using the hostname mydb (same as with docker-compose). You can still expose ports in the myapp container to your host using -p 3000:3000, etc.
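For example, the connection string from the question would then only change in its host part, assuming the database container is named mydb as above:
Data Source=mydb;Catalog=DB name;Integrated Security=True;MultipleActiveResultSets=True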
Further reading: https://docs.docker.com/network/bridge/
You can use docker-compose services to achieve what you are looking for. Here is a simplified example docker-compose.yml file:
version: "3.5"
services:
  db:
    container_name: mock_db
    restart: "no"
    build: ./mock_db
    expose:
      - 5432  # or whatever your port is
    env_file: .env
    command: your-command
  server:
    container_name: my_server
    build: ./server
    env_file: .env
    ports:
      - "8443:8443"
    command: your-command
You can then reference the service name (in this case db) as the host part of your connection string.
You can read more about docker-compose configuration options here
I'd like my web Docker container to access Redis on 127.0.0.1:6379 from within the web container. I've set up my Docker Compose file as follows, but I get ECONNREFUSED:
version: "3"
services:
  web:
    build: .
    ports:
      - 8080:8080
    command: ["test"]
    links:
      - redis:127.0.0.1
  redis:
    image: redis:alpine
    ports:
      - 6379
Any ideas?
The short answer to this is "don't". Docker containers each get their own loopback interface, 127.0.0.1, that is separate from the host loopback and from that of other containers. You can't redefine 127.0.0.1, and if you could, that would almost certainly break other things.
One technically possible workaround is to run all containers directly on the host network, with:
network_mode: "host"
However, that removes the Docker network isolation you will want with containers.
You can also attach one container to the network of another container (so they have the same loopback interface) with:
docker run --net container:$container_id ...
but I'm not sure whether there's a syntax to do this in docker-compose, and it's not available in swarm mode since containers may run on different nodes. The main use I've had for this syntax is attaching network debugging tools like nicolaka/netshoot.
What you should do instead is make the location of the Redis database a configuration parameter of your webapp container. Pass the location in as an environment variable, a config file, or a command line parameter. If the web app can't support this directly, rewrite the configuration with an entrypoint script that runs before the web app starts (a minimal sketch follows the compose file below). This would change your compose file to look like:
version: "3"
services:
  web:
    # you should include an image name
    image: your_webapp_image_name
    build: .
    ports:
      - 8080:8080
    command: ["test"]
    environment:
      - REDIS_URL=redis:6379
    # no need to link, it's deprecated, use dns and the network docker creates
    #links:
    #  - redis:127.0.0.1
  redis:
    image: redis:alpine
    # no need to publish the port if you don't need external access
    #ports:
    #  - 6379
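If the web app cannot read REDIS_URL directly, a minimal entrypoint sketch (the config path and the placeholder token are assumptions) could rewrite its configuration before starting the app:
#!/bin/sh
# entrypoint.sh: substitute the Redis location into the app config, then start the app
sed -i "s|__REDIS_URL__|${REDIS_URL}|g" /app/config.json
exec "$@"
The image would then use this script as its ENTRYPOINT and keep the original start command as CMD.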
I'm trying to get the container ID of the running container at startup time, in order to use that information in service health check APIs.
I have a load balancer sitting in front of a fleet of containers, and it runs periodic health checks via https://service-n.api.com/health. The idea is to return the container information with the API responses.
I'm using docker-compose to spin up the containers; it would be great if there were a way to pass the container ID as an environment variable to the container, like below:
version: '2'
services:
  web:
    image: my.registry.com/pq-api:1.0.0
    container_name: my-container
    ports:
      - 80:80
      - 443:443
    network_mode: bridge
    environment:
      CONTAINER_ID: "{{.ID}}"
The container ID is already available by default inside every container via the HOSTNAME environment variable:
$ docker run alpine env
HOSTNAME=....
....
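So a health check endpoint can simply read HOSTNAME at run time; a quick way to see it (a sketch, any image will do):
docker run --rm alpine sh -c 'echo "running in container $HOSTNAME"'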
Is there a way to permanently set a hostname and IP for a container in Docker?
I want to create a stack of machines (containers) in one VM, ideally talking to one another by hostname.
You can use the networking feature available since Docker version 1.10.0.
It allows you to connect to containers by their name and to assign IP addresses and hostnames.
When you create a new network, any container connected to that network can reach the other containers by name, IP, or hostname.
For example:
1) Create network
$ docker network create --subnet=172.18.0.0/16 mynet123
2) Create container inside the network
$ docker run --net mynet123 -h myhostname --ip 172.18.0.22 -it ubuntu bash
Flags:
--net to connect a container to a network
--ip to specify an IPv4 address
-h, --hostname to specify a hostname
--add-host to add more entries to /etc/hosts
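For instance, a hedged example of --add-host (the extra hostname and address are placeholders):
# adds the line "172.18.0.30 db.local" to the container's /etc/hosts
docker run --net mynet123 --add-host db.local:172.18.0.30 -it ubuntu bash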
You can use the docker-compose tool to create a stack of containers with specific hostnames and addresses.
Here is an example docker-compose.yml with a specific network config:
version: "2"
services:
  host1:
    networks:
      mynet:
        ipv4_address: 172.25.0.101
networks:
  mynet:
    driver: bridge
    ipam:
      config:
        - subnet: 172.25.0.0/24
Source: Docker Compose static IP address in docker-compose.yml.
And here is an example docker-compose.yml file with containers pinging each other:
version: '3'
services:
  ubuntu01:
    image: bash
    hostname: ubuntu01
    command: ping -c1 ubuntu02
  ubuntu02:
    image: bash
    hostname: ubuntu02
    command: ping -c1 ubuntu01
Run with docker-compose up.