As the title states, running docker ps shows my container but with each port mapping listed twice. I'd like some clarification on what is going on. I'm attempting to bind a ZMQ socket to localhost:25565, which is then used inside the Docker container, but I can't seem to connect to that either.
Ports:
13370/tcp -> 0.0.0.0:25566
13370/tcp -> :::25566 <-- What?
13371/tcp -> 0.0.0.0:25565
13371/tcp -> :::25565 <-- What?
13372/tcp -> 0.0.0.0:25564
13372/tcp -> :::25564 <-- What?
Docker-compose:
version: '3.7'
services:
  test:
    build: ./test
    container_name: test
    # network_mode: "host"
    ports:
      - "25566:13370"
      - "25565:13371"
      - "25564:13372"
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: [gpu]
Dockerfile (all lines omitted except to show that EXPOSE is commented out):
# EXPOSE 25566:13370
# EXPOSE 25565:13371
# EXPOSE 25564:13372
Related
I have a docker-compose.yml:
services:
  backend:
    build:
      ./backend
    ports:
      - 8100:8100
    container_name: "backend"
  frontend:
    build:
      ./frontend
    ports:
      - 4200:4200
    container_name: "frontend"
    depends_on:
      - backend
And I want to get rid of the ports part. I have .env files in the folders /backend and /frontend with the port number set in there (e.g. PORT=8100). In the Dockerfile I can just do EXPOSE ${PORT}. But since I can't read the .env from the docker-compose location, I am not able to expose the port in the same way. Is it possible to just have a wildcard that exposes each container's port to the same port on my host, like:
ports:
- *:*
No, there's no syntax like this. The ports: syntax always requires you to specify the container-side port, and you must have a ports: block if you want the container to be accessible from outside Docker.
If you weren't using Compose there is a docker run -P (capital P) option, but there the container ports are published on randomly-selected host ports. (This is one of the few contexts where "expose" as a Docker verb does anything useful: docker run -P publishes all exposed ports.)
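For illustration, a rough sketch of that behaviour (myapp is a hypothetical image tag, and container port 3000 is assumed to be EXPOSEd in its Dockerfile):
$ docker build -t myapp ./backend
$ docker run -d -P --name myapp myapp
$ docker port myapp 3000   # prints the randomly chosen host port, e.g. 0.0.0.0:49153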
However: there is no rule that the two port numbers must match. Rather than having the port number configurable in the Dockerfile (requiring a rebuild on any change) it's more common to use a fixed port number in the image and allow the host port number to be configured at deploy time.
For example, assume both images are configured to use the default Express port 3000. You can remap these when you run the containers:
version: '3.8'
services:
  backend:
    build: ./backend
    ports: ['8100:3000']
  frontend:
    build: ./frontend
    depends_on: [backend]
    ports: ['4200:3000']
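If you still want the host-side numbers to be configurable, Compose interpolates variables from a .env file that sits next to the docker-compose.yml, so a sketch along these lines (BACKEND_PORT and FRONTEND_PORT are made-up variable names) would also work:
# .env next to docker-compose.yml: BACKEND_PORT=8100, FRONTEND_PORT=4200
version: '3.8'
services:
  backend:
    build: ./backend
    ports: ['${BACKEND_PORT:-8100}:3000']
  frontend:
    build: ./frontend
    depends_on: [backend]
    ports: ['${FRONTEND_PORT:-4200}:3000']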
I am using Docker for Windows and have a Docker Compose file that creates a couple of customer applications as well as a RabbitMQ and a Seq container. These all talk to each other via instance names on the local network created by docker-compose, for example:
version: '3.4'
services:
  legacydata.workerservice:
    container_name: legacydata.workerservice
    image: ${DOCKER_REGISTRY-}legacydataworkerservice
    build:
      context: .
      dockerfile: LegacyData.Worker/Dockerfile
    depends_on:
      - rabbitmq
  legacydata.consumer:
    container_name: legacydata.consumer
    image: ${DOCKER_REGISTRY-}legacydataconsumer
    build:
      context: .
      dockerfile: LegacyData.Consumer/Dockerfile
    depends_on:
      - rabbitmq
  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: rabbitmq
    environment:
      - RABBITMQ_ERLANG_COOKIE='secretcookie'
      - RABBITMQ_DEFAULT_USER=user
      - RABBITMQ_DEFAULT_PASS=password
    ports:
      - 5672:5672
      - 15672:15672
  ## Move Seq to Azure ACI
  # seq:
  #   image: datalust/seq:latest
  #   container_name: seq
  #   ports:
  #     - 5341:80
  #   environment:
  #     ACCEPT_EULA: Y
I want to move the Seq instance into Azure ACI (I have this running and I can access it as expected, for example at http://myseq.southuk.azurecontainer.io).
How do I configure Docker so that my local containers can access both each other and internet resources?
Actually, you can create a container group that contains all the containers; the containers inside a container group can communicate with each other over the ports they have exposed. You can follow the example here. By default, the containers can also access the Internet without any problem.
The only problem is that you need to build all the images yourself locally from the Dockerfiles. ACI does not build the images for you.
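On the local side, containers started by Compose can already reach Internet hosts, so pointing them at the ACI-hosted Seq is just a matter of handing them its public URL. A minimal sketch as a fragment of the compose file above, assuming a hypothetical SEQ_SERVER_URL variable that your logging configuration reads:
services:
  legacydata.workerservice:
    build:
      context: .
      dockerfile: LegacyData.Worker/Dockerfile
    environment:
      # made-up variable name; the app's Seq sink must be configured to use it
      - SEQ_SERVER_URL=http://myseq.southuk.azurecontainer.io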
When I run docker-compose up, it tells me it's listening on localhost port 5000, despite the port 80 configuration in my docker-compose.override.yml file, and despite exposing port 80 in the Dockerfile.
I've also tried setting the port in docker-compose.yml, but it still defaults to port 5000.
docker-compose.yml
version: '3'
services:
  myapp:
    image: me/myapp
    environment:
      - ASPNETCORE_ENVIRONMENT=Production
    build:
      context: .
      dockerfile: MyApp.API/Dockerfile
docker-compose.override.yml
version: '3'
services:
  myapp:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "80:80"
Dockerfile:
...
EXPOSE 80
...
What am I missing that would force docker-compose up to listen on the port specified in the yml?
What matters is which URLs the ASP.NET Core application is configured to listen on. You can set that by putting it into the environment, like this:
Dockerfile
ENV ASPNETCORE_URLS=http://*:80
Alternatively, you can also define it in your docker-compose.yml:
version: '3'
services:
  myapp:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=http://*:80
    ports:
      - "80:80"
The EXPOSE instruction in Dockerfile does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published. To actually publish the port when running the container, use the -p flag on docker run to publish and map one or more ports, or the -P flag to publish all exposed ports and map them to high-order ports.
In your case all control is done via Docker Compose, and I would not worry about seeing port 5000, as it is still going to be inaccessible from outside the container.
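For completeness, a quick sketch of the two publishing options when running the image by hand (me/myapp is the image name from the compose file above):
$ docker run -d -p 80:80 me/myapp             # publish container port 80 on host port 80, like "80:80" in Compose
$ docker run -d -P --name myapp-tmp me/myapp  # publish every EXPOSEd port on a random high host port
$ docker port myapp-tmp 80                    # show which host port was picked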
What is the use of container_name in the docker-compose.yml file? Can I use it as a hostname, which is otherwise nothing but the service name in the docker-compose.yml file?
Also, when I explicitly write hostname: under a service, does it override the hostname represented by the service name?
hostname: just sets what the container believes its own hostname is. In the unusual event you got a shell inside the container, it might show up in the prompt. It has no effect on anything outside, and there’s usually no point in setting it. (It has basically the same effect as hostname(1): that command doesn’t cause anything outside your host to know the name you set.)
container_name: sets the actual name of the container when it runs, rather than letting Docker Compose generate it. If this name is different from the name of the block in services:, both names will be usable as DNS names for inter-container communication. Unless you need to use docker to manage a container that Compose started, you usually don’t need to set this either.
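For example (a sketch using the container_name "backend" from the compose file earlier in this thread), plain docker commands can then address the container by that fixed name instead of a generated one like <project>_backend_1:
$ docker logs backend
$ docker exec -it backend sh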
If you omit both of these settings, one container can reach another (provided they're in the same Docker Compose file and have compatible networks: settings) using the name of the services: block and the port the service inside the container is listening on.
version: '3'
services:
  redis:
    image: redis
  db:
    image: mysql
    ports: ['6033:3306']
  app:
    build: .
    ports: ['12345:8990']
    environment:
      REDIS_HOST: redis
      REDIS_PORT: 6379
      MYSQL_HOST: db
      MYSQL_PORT: 3306
The easiest answer is the following:
container_name: This is the container name that you see from the host machine when listing the running containers with the docker container ls command.
hostname: The hostname of the container. Actually, the name that you define here goes into the /etc/hosts file:
$ docker exec -it myserver /bin/bash
bash-4.2# cat /etc/hosts
127.0.0.1 localhost
172.18.0.2 myserver
That means you can ping machines by that name within a Docker network.
I highly suggest setting these two parameters to the same value to avoid confusion.
An example docker-compose.yml file:
version: '3'
services:
  database-server:
    image: ...
    container_name: database-server
    hostname: database-server
    ports:
      - "xxxx:yyyy"
  web-server:
    image: ...
    container_name: web-server
    hostname: web-server
    ports:
      - "xxxx:xxxx"
      - "5101:4001" # debug port
You can customize the image name to build and the container name during docker-compose up. For this, you need to specify them as below in the docker-compose.yml file.
It will create an image and a container with custom names.
version: '3'
services:
  frontend_dev:
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
    build:
      context: .
      dockerfile: Dockerfile.dev
    image: "mycustomname/sample:v1"
    container_name: mycustomname_sample_v1
    ports:
      - '3000:3000'
    volumes:
      - /app/node_modules
      - .:/app
I have 2 docker compose files:
first:
version: '3'
services:
  service1:
    ports:
      - "8081:8080"
    ...
second:
version: '3'
services:
  service2:
    ports:
      - "8088:8088"
From service2 I try to execute an HTTP request to service1:
org.springframework.web.client.ResourceAccessException: I/O error on GET request for "http://service1:8081/api/v1/test/": service1; nested exception is java.net.UnknownHostException:service1] with root cause
router |
router | java.net.UnknownHostException: resource.mng
router | at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184) ~[na:1.8.0_171]
router | at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[na:1.8.0_171]
How can I fix it?
Create an external network if it doesn't exist:
$ docker network create service || true
Define the external network in both compose files (first and second):
.........
    ports:
      - "8088:8088"
    networks:
      - service
networks:
  service:
    external: true
Do a docker-compose up -d, and now you should be able to reach the service1 container by the name service1 from the service2 container.
Similarly, you can also use the default network created by Compose, but it will prefix the defined network name with your current directory name. You can also use host network mode, but that's not suggested.
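As a quick way to check that default name (a sketch; "first" stands in for whatever your project directory is called):
$ docker network ls --filter name=first_default
$ docker network inspect first_default --format '{{range .Containers}}{{.Name}} {{end}}'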
Update 1 -
docker-compose creates a default network whose name is prefixed with the directory name.
https://docs.docker.com/compose/networking/
Recently, in compose file format 3.5, a custom name feature was introduced. So in case you can use compose 3.5, you can opt to give a custom name to your Docker Compose network. Compose will create a new network in case it doesn't exist (preferred).
https://docs.docker.com/compose/compose-file/#name-1
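A minimal sketch of that 3.5 syntax from the docs linked above (the network key my_net and the fixed name service are illustrative):
version: '3.5'
services:
  service2:
    ports:
      - "8088:8088"
    networks:
      - my_net
networks:
  my_net:
    name: service   # fixed name, no directory prefix; created if missing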
You can take a call depending on your requirements. If you are on a shell, you can use the shell trick below to create the network only if it doesn't exist, ignoring the error if it was pre-created.
$ docker network create service || true
Two ways:
1.
service1 and service2 should be in the same network!
docker network create -d overlay --attachable my_net
first:
...
  service1:
    ports:
      - "8081:8080"
    networks:
      - net
...
networks:
  net:
    external: true
    name: my_net
second:
...
  service2:
    ports:
      - "8088:8088"
    networks:
      - net
...
networks:
  net:
    external: true
    name: my_net
In this case the request will be http://service1:8080/api/v1/test/ (the container-side port, not the published one).
2.
You are forwarding the port to the host: "8081:8080".
The IP of the Docker host on the Compose network is typically 172.18.0.1 (check your ifconfig).
In this case the request will be http://172.18.0.1:8081/api/v1/test/ (the published host port).
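A quick way to verify both paths from inside the service2 container (assuming curl is available in the image and the container is named service2; adjust to whatever docker ps shows):
$ docker exec -it service2 curl http://service1:8080/api/v1/test/     # way 1: shared network, container port
$ docker exec -it service2 curl http://172.18.0.1:8081/api/v1/test/   # way 2: host-published port via the gateway IP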
Answer to the question "Is it possible to ask docker-compose to create a network if it doesn't exist?":
Yes, it is possible.
In the 'first' compose file:
version: "3.3"
...
...
networks:
  my_net:
    driver: overlay
    attachable: true
If you are running docker stack deploy -c file.yml hello
-> a network named hello_my_net will be created
If you are running docker-compose up -d
-> a network named directory-name_my_net will be created (where directory-name is the name of the directory containing the docker-compose.yml file)
So pay attention to the name of the network.
Other services then attach to this network.
Attention! The compose file version should be >= "3.3".