I want to spin up a database container (e.g. MongoDB) with docker-compose so that I can run some tests against the database.
This is my mongodb.yml docker-compose file.
version: '3.7'
services:
  mongo:
    image: mongo:latest
    restart: always
    ports:
      - 27017:27017
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=example
  mongo-express:
    image: mongo-express:latest
    restart: always
    ports:
      - 8081:8081
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME=root
      - ME_CONFIG_MONGODB_ADMINPASSWORD=example
    depends_on:
      - mongo
When I run it with docker-compose -f mongodb.yml up I can successfully connect to MongoDB on localhost. In other words, the following connection string is valid: "mongodb://root:example@localhost:27017/admin?ssl=false"
I want to use the equivalent of an alias so that, instead of localhost, MongoDB is accessible through the hostname potato.
In GitLab CI/CD, with a Docker runner, I can spin up a mongo container and provide an alias without any issue. Like this:
my_ci_cd_job:
  stage: my_stage
  services:
    - name: mongo:latest
      alias: potato
  variables:
    MONGO_INITDB_ROOT_USERNAME: root
    MONGO_INITDB_ROOT_PASSWORD: example
This allows me to connect within the GitLab CI/CD job with "mongodb://root:example@potato:27017/admin?ssl=false"
I need the same locally, so that I can reuse my connection string.
I've added the following under image: mongo:latest:
    container_name: potato
But I cannot connect to the server potato. I've tried a few combinations with network, alias, etc. No luck. I don't even understand what I am doing anymore. Isn't there a simple equivalent to give an alias to a container so that my C# app or MongoDB client can access it?
Even the documentation at https://docs.docker.com/compose/compose-file/#external_links is, in my opinion, useless. It mentions random samples not defined elsewhere.
Any help is much appreciated!
I've tried...
I've tried the suggestions from How do I set hostname in docker-compose?, without success.
I have spent a few hours reading the docker-compose documentation and it's extremely confusing. The fact that most of the questions are out of context, without specific examples, does not help either, because it requires some deeper knowledge.
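For the record, docker-compose does have network aliases, but from what I understand they only affect DNS between containers on the same compose network, not connections from the host machine, which is why they didn't help me here. A rough sketch of that syntax (the network name backend is just made up):

version: '3.7'
services:
  mongo:
    image: mongo:latest
    networks:
      backend:
        aliases:
          - potato
networks:
  backend: {}

With this, another container on the backend network could reach MongoDB as potato, but my C# app running on the host still could not.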
SOLUTION
Thanks to the replies, it's clear this is a hacky thing that's not really recommended.
I went with the recommendations and I now have the connection string as:
mongodb://root:example@potato:27017/admin?ssl=false
by default. That way, I don't need to change anything for my GitLab CI/CD pipeline, which has the alias potato for MongoDB.
When running the MongoDB container locally with docker-compose, it is exposed on localhost, so I edited my hosts file (in Windows C:\Windows\System32\drivers\etc\hosts, in Linux /etc/hosts) to make any request to potato go to localhost / 127.0.0.1:
127.0.0.1 potato
And that way I can connect to MongoDB from my code or Robo3T as if it were running on a host called potato.
Feel free to comment with any better alternative if there is one. Ta.
If I understand you correctly, you want to bind a hostname (e.g., potato) to an IP address on your host machine.
As far as I know this is not possible, but there are workarounds [1].
Every time you start your docker-compose project, a network is created for those containers, and there is no way for you to be sure which IP addresses they will get. These could be in 172.17.0.0/24 or 172.18.0.0/24 or anything else really.
The only thing you know for sure is that your host will have a service running at port 27017. So you could make the hostname potato point to localhost on your host machine by adding 127.0.0.1 potato to the /etc/hosts file on your host.
That way the connection string "mongodb://root:example@potato:27017/admin?ssl=false" will point to the local port from the perspective of your host machine, while it will point to the Docker container from the perspective of the rest of your docker-compose services.
I do have to say that I find this a hacky approach, and as @DavidMaze said, it's normal to need different connection strings depending on the context you use them in.
[1] https://github.com/dlemphers/docker-local-hosts
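If you want to keep a single connection-string template anyway, one sketch using compose variable substitution (MONGO_HOST and the app service are made-up names; the value comes from your shell environment or an .env file) would be:

services:
  app:
    image: my-csharp-app:latest
    environment:
      # falls back to localhost when MONGO_HOST is not set
      - MONGO_URL=mongodb://root:example@${MONGO_HOST:-localhost}:27017/admin?ssl=false

That way the same file works locally and in CI by exporting a different MONGO_HOST per context.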
Related
I've run into a problem I've been trying to solve for more than a week and I'm getting nowhere. I hope you can point me in the right direction.
Initial Situation
Description
The project I am building is a NestJS app which connects to some APIs. Internally, it uses bullmq as a message queue, which itself uses ioredis to connect to a Redis database. I start my self-written server component as well as Redis (both in Docker) via docker-compose up with the following configuration:
version: '3'
services:
  server:
    image: myserver:1.4.0
    container_name: myserver
    depends_on:
      - db
    ports:
      - 3001:3000
    environment:
      - REDIS_HOST=redis
  db:
    image: redis:6.0.8
    container_name: redis
    ports:
      - 6379:6379
Versions
Workstation
Docker version 19.03.13, build 4484c46d9d
docker-compose version 1.27.4, build 40524192
Server Component
bullmq 1.9.0
ioredis 4.17.3
Redis Docker image 6.0.8
Problem
The problem with my server component is that it tries to connect to the Redis instance at the given REDIS_HOST on port 6379 using the following code:
// ioredis accepts the port first, then the host: new Redis(port, host)
readonly connection = new Redis(
  +(process.env.REDIS_PORT ?? this.configService.get('redis_port')),
  process.env.REDIS_HOST ?? this.configService.get('redis_host'),
);
but throws the following error:
[ioredis] Unhandled error event: Error: connect ECONNREFUSED 127.0.0.1:6379
I expected it to just see the redis instance at the exposed port.
So it ends up looking for the Redis instance at 127.0.0.1: but shouldn't it use the given host instead?
Things I checked
The server code is correct; REDIS_HOST is correctly set and passed into the ioredis call. Digging further inside ioredis I found this issue. Given all these hints it should be reachable; locally on my workstation I use 0.0.0.0:6379 to connect and it works just fine.
Docker Compose does create a bridge network automatically, and using netcat I checked port 6379 on the IP of the Redis container (as well as the aliases redis and db): the Redis instance is reachable from the server container's console.
I then explicitly set the subnet using the network configuration of docker-compose and gave the containers static IPs, but as I already described: the IP is correctly resolved.
I found the following issue on the Docker GitHub: issue 204. I think this is exactly the problem I am facing here, but how does one solve it?
tl;dr: even though the Redis host is correctly resolved, ioredis ends up connecting to 127.0.0.1, where no Redis instance is available inside the server container.
What my current state is
I sob uncontrollably.
I am currently out of ideas on how to get the "myserver" container to connect to the Redis instance via ioredis. My point of view is that the problem has to be connected to the way Docker on Windows resolves IPs to 127.0.0.1.
Is my point of view right?
What other ways can you suggest I try?
Best regards & thanks in advance.
Edit (2020-11-27): After some digging and further investigating the suggestions of Jeffrey Mixon, I'm unfortunately not any closer to a solution. My last steps included:
updating all dependencies (among others bullmq to v1.11, ioredis to 4.19.2). This did not change anything.
I then found a relatively new post on the bullmq issue board about a similar problem, and I switched from reusing the connection object shown above to always creating a new connection, as it's also explained in the bullmq docs. But this did not help either.
new Queue(name, {
  connection: {
    host: this.redisHost,
    port: this.redisPort,
  },
})
I then switched from using the 'Redis' object from the ioredis library to 'IORedis', but still nothing changed in the behavior of my application container. Even though the Redis connection is correctly created with the Redis host, it still tries to connect to 127.0.0.1:6379 as shown above.
Lastly, there is the strange behavior that if I, for example, choose an unresolvable host URL, the application container correctly tries to connect to that unresolvable host. But as soon as this host is available in the docker-compose network, it uses 127.0.0.1.
Edit (2020-12-01):
In the meantime, I checked on a clean Linux machine whether the problem only happens on Docker for Windows, but it happens on Linux as well.
I did not solve the problem itself, but I bypassed it by putting everything inside one container. As my application is more of a proof of concept, there is no big pain in doing so. I will leave the question open in case there happens to be a solution in the future or more people have the same question.
For those wondering, my Dockerfile now stacks my application on top of a Redis image. I'm adding the parts from the ng-cli-e2e image I used before. So at the beginning of my existing Dockerfile I added:
FROM redis:6.0.9-buster
RUN apt update
RUN apt install nodejs -y
RUN apt install npm -y
At the end I created a small wrapper script which just starts the Redis server as well as my application. I'm also exposing two ports now, in case I want to access everything from my machine:
EXPOSE 3000 6379
CMD ./docker-start-wrapper.sh
It's not the most beautiful solution, but it does work for the moment.
The problem is that your application container is using localhost as the hostname for connecting to the redis container. It should be using the hostname redis in this case.
Consider the following demonstration:
version: '3.7'
services:
  server:
    image: busybox
    container_name: myserver
    entrypoint: /bin/sh
    # if redis is replaced by localhost or 127.0.0.1 below, this container will fail
    command: "-c \"sleep 2 && nc -v redis 6379\""
    depends_on:
      - db
    ports:
      - 3001:3000
    environment:
      - REDIS_HOST=redis
  db:
    image: busybox
    container_name: redis
    entrypoint: /bin/nc
    command: "-l -p 6379 -v"
    # it is not necessary to publish these ports
    #ports:
    #  - 6379:6379
$ docker-compose -f scratch_2.yml up
Creating network "scratches_default" with the default driver
Creating redis ... done
Creating myserver ... done
Attaching to redis, myserver
redis | listening on [::]:6379 ...
myserver | redis (172.25.0.2:6379) open
redis | connect to [::ffff:172.25.0.2]:6379 from [::ffff:172.25.0.3]:34821 ([::ffff:172.25.0.3]:34821)
myserver exited with code 0
redis exited with code 0
When you publish ports, they are for use outside the containers, on the host. By attempting to connect your myserver container to 127.0.0.1, the container is simply attempting to connect to itself.
The problem with docker-compose here is that Redis is not on localhost; it is on its own network instead. By default, all the containers in a docker-compose project share the same default network, so your Redis container should be reachable by all the other containers in that project using the hostname redis (your container name) or the service name, in your case db.
Another point to note is that if you are using bullmq, not only the Queue options need the custom connection, but also any Worker or QueueScheduler that you use, so you should pass the custom connection options to them as well.
I've seen many examples of Docker compose and that makes perfect sense to me, but all bundle their frontend and backend as separate containers on the same composition. In my use case I've developed a backend (in Django) and a frontend (in React) for a particular application. However, I want to be able to allow my backend API to be consumed by other client applications down the road, and thus I'd like to isolate them from one another.
Essentially, I envision it looking something like this. I would have a docker-compose file for my backend, which would consist of a PostgreSQL container and a webserver (Apache) container with a volume to my source code. Not going to get into implementation details but because containers in the same composition exist on the same network I can refer to the DB in the source code using the alias in the file. That is one environment with 2 containers.
On my frontend and any other future client applications that consume the backend, I would have a webserver (Apache) container to serve the compiled static build of the React source. That of course exists in its own environment, so my question is: how do I converge the two such that I can refer to the backend alias in my base URL (axios, fetch, etc.)? How do you ship both "environments" to a registry and then deploy from that registry such that they can continue to communicate with each other?
I feel like I'm probably missing the mark on how the Docker architecture works at large but to my knowledge there is a default network and Docker will execute the composition and run it on the default network unless otherwise specified or if it's already in use. However, two separate compositions are two separate networks, no? I'd very much appreciate a lesson on the semantics, and thank you in advance.
There's a couple of ways to get multiple Compose files to connect together. The easiest is just to declare that one project's default network is the other's:
networks:
  default:
    external:
      name: other_default
(docker network ls will tell you the actual name once you've started the other Compose project.) This is also suggested in the Docker Networking in Compose documentation.
An important architectural point is that your browser application will never be able to use the Docker hostnames. Your fetch() call runs in the browser, not in Docker, and so it needs to reach a published port. The best way to set this up is to have the Apache server that's serving the built UI code also run a reverse proxy, so that you can use a same-server relative URL /api/... to reach the backend. The Apache ProxyPass directive would be able to use the Docker-internal hostnames.
You also mention "volume with your source code". This is not a Docker best practice. It's frequently used to make Docker simulate a local development environment, but it's not how you want to deploy or run your code in production. The Docker image should be self-contained, and your docker-compose.yml generally shouldn't need volumes: or a command:.
A skeleton layout for what you're proposing could look like:
# backend project: docker-compose.yml
version: '3'
services:
  db:
    image: postgres:12
    volumes:
      - pgdata:/var/lib/postgresql/data
  backend:
    image: my/backend
    environment:
      PGHOST: db
    # No ports: (not directly exposed) (but it could be)
    # No volumes: or command: (use what's in the image)
volumes:
  pgdata:
# frontend project: docker-compose.yml
version: '3'
services:
  frontend:
    image: my/frontend
    environment:
      BACKEND_URL: http://backend:3000
    ports:
      - 8080:80
networks:
  default:
    external:
      name: backend_default
I want to run a web app and a DB using Docker. Is there any way to connect two containers (the web app Docker container on one machine and the DB Docker container on another machine) using a docker-compose file, without Docker swarm mode?
I mean 2 separate servers.
This is my MongoDB docker-compose file:
version: '2'
services:
  mongodb_container:
    image: mongo:latest
    restart: unless-stopped
    ports:
      - 27017:27017
    volumes:
      - mongodb_data_container:/data/db
volumes:
  mongodb_data_container:
Here is my demowebapp docker-compose file
version: '2'
services:
  demowebapp:
    image: demoapp:latest
    restart: unless-stopped
    volumes:
      - ./uploads:/app/uploads
    environment:
      - PORT=3000
      - ROOT_URL=http://localhost
      - MONGO_URL=mongodb://35.168.21.133/demodb
    ports:
      - 3000:3000
Can anyone suggest how to do this?
Using only one docker-compose.yml with compose version: 2 there is no way to deploy 2 services on two different machines. That's what version: 3 using a stack.yml and swarm-mode are used for.
You can, however, deploy to two different machines using two version 2 docker-compose.yml files, but you will have to connect them using hostnames/IPs rather than the service names from the compose files.
You shouldn't need to change anything in the sample files you show: you have to connect to the other host's IP address (or DNS name) and the published ports:.
Once you're on a different machine (or in a different VM) none of the details around Docker are visible any more. From the point of view of the system running the Web application, the first system is running MongoDB on port 27017; it might be running on bare metal, or in a container, or port-forwarded from a VM, or using something like HAProxy to pass through from another system; there's literally no way to tell.
The configuration you have to connect to the first server's IP address will work. I'd set up a DNS system if you don't already have one (BIND, AWS Route 53, ...) to avoid needing to hard-code the IP address. You also might look at a service-discovery system (I have had good luck with Hashicorp's Consul in the past) which can send you to "the host system running MongoDB" without needing to know which one that is.
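As a sketch of that DNS idea (mongo.internal.example.com is a made-up name that would have to resolve to the machine actually running MongoDB):

version: '2'
services:
  demowebapp:
    image: demoapp:latest
    environment:
      # DNS name instead of a hard-coded IP address
      - MONGO_URL=mongodb://mongo.internal.example.com:27017/demodb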
Hi, I am finding it very confusing how I can create a docker file that would run a RabbitMQ container, where I can expose the port so I can navigate to the management console via localhost and a port number.
I see someone has provided this dockerfile example, but I'm unsure how to run it.
version: "3"
services:
rabbitmq:
image: "rabbitmq:3-management"
ports:
- "5672:5672"
- "15672:15672"
volumes:
- "rabbitmq_data:/data"
volumes:
rabbitmq_data:
I have got RabbitMQ working fine locally, but everyone tells me Docker is the future; at this rate I don't get it.
Does the above look like a valid way to run a RabbitMQ container? Where can I find a full, understandable example?
Do I need a docker file or am I misunderstanding it?
How can I specify the port? In the example above, what are the first numbers in 5672:5672 and what are the last ones?
How can I be sure that when I run the container again, say after a machine restart that I get the same container?
Many thanks
Andrew
Docker-compose
What you posted is not a Dockerfile. It is a docker-compose file.
To run that, you need to
1) Create a file called docker-compose.yml and paste the following inside:
version: "3"
services:
rabbitmq:
image: "rabbitmq:3-management"
ports:
- "5672:5672"
- "15672:15672"
volumes:
- "rabbitmq_data:/data"
volumes:
rabbitmq_data:
2) Download docker-compose (https://docs.docker.com/compose/install/)
3) (Re-)start Docker.
4) On a console run:
cd <location of docker-compose.yml>
docker-compose up
Do I need a docker file or am I misunderstanding it?
You have a docker-compose file. rabbitmq:3-management is the Docker image built using the RabbitMQ Dockerfile (which you don't need; the image will be downloaded the first time you run docker-compose up).
How can I specify the port? In the example above what are the first numbers 5672:5672 and what are the last ones?
"5672:5672" specifies the port of the queue.
"15672:15672" specifies the port of the management plugin.
The numbers on the left-hand-side are the ports you can access from outside of the container. So, if you want to work with different ports, change the ones on the left. The right ones are defined internally.
This means you can access the management plugin at http://localhost:15672 (or more generically http://<host-ip>:<port mapped to 15672>).
You can see more info on the RabbitMQ Image on the Docker Hub.
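For example, a sketch that remaps only the host-side port, so the management UI is served on port 8080 of your machine while the container still listens on 15672 (8080 is an arbitrary choice here):

    ports:
      - "5672:5672"
      - "8080:15672"

After docker-compose up, the management console would then be at http://localhost:8080.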
How can I be sure that when I rerun the container, say after a machine restart, I get the same container?
I assume you want the same container because you want to persist the data. You can use docker-compose stop, restart your machine, then run docker-compose start. Then the same container is used. However, if the container is ever deleted you lose the data inside it.
That is why you are using Volumes. The data collected in your container gets also stored in your host machine. So, if you remove your container and start a new one, the data is still there because it was stored in the host machine.
I'm using Docker beta on a Mac and have some services set up in service-a/docker-compose.yml:
version: '2'
services:
  service-a:
    # ...
    ports:
      - '4000:80'
I then set up the following in /etc/hosts:
::1 service-a.here
127.0.0.1 service-a.here
and I've got an nginx server running that proxies service-a.here to localhost:4000.
So on my mac I can just run: curl http://service-a.here. This all works nicely.
Now, I'm building another service, service-b/docker-compose.yml:
version: '2'
services:
  service-b:
    # ...
    ports:
      - '4001:80'
    environment:
      SERVICE_A_URL: service-a.here
service-b needs service-a for a couple of things:
It needs to redirect the user in the browser to the $SERVICE_A_URL
It needs to perform HTTP requests to service-a, also using the $SERVICE_A_URL
With this setup, only the redirection (1.) works. HTTP requests (2.) do not work because the service-b container has no notion of service-a.here in its DNS.
I tried adding service-a.here using the extra_hosts configuration option, but I'm not sure what to set it to. localhost will not work, of course.
Note that I really want to keep the docker-compose files separate (joining them would not fix my problem by the way) because they both already have a lot of services running inside of them.
Is there a way to access the DNS resolution of localhost from inside a Docker container, so that for instance curl service-a.here will work from inside a container?
You can use the links instruction in your docker-compose.yml file to automatically resolve the address of service-a from your service-b container.
service-b:
  image: blabla
  links:
    - service-a:service-a
service-a:
  image: blablabla
You will now have a line in the /etc/hosts of your service-b container saying:
172.17.0.X service-a
And note that service-a will be created before service-b when composing your app. I'm not sure how you can specify a particular IP after that, but Docker's documentation is pretty well done. Hope that's what you were looking for.
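Another option, assuming both compose projects run on the same Docker host, is a shared external network with an alias, so service-b can resolve service-a.here directly (the network name shared is made up, and it has to be created once with docker network create shared before starting either project):

# service-a/docker-compose.yml
version: '2'
services:
  service-a:
    # ...
    networks:
      shared:
        aliases:
          - service-a.here
networks:
  shared:
    external: true

# service-b/docker-compose.yml
version: '2'
services:
  service-b:
    # ...
    networks:
      - shared
networks:
  shared:
    external: true

This only covers container-to-container resolution; requests from the browser on your Mac would still go through the /etc/hosts entries and published ports.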