Is it possible, in the docker-compose file, to put the container's NetworkSettings information into an environment variable?
I have the following docker-compose.yml file:
version: '3.7'
services:
  sdt-proxy:
    image: myimage
    ports:
      - 32770-32780:8181
    environment:
      - SERVER_PORT=8181
This maps container port 8181 to some port in the range 32770-32780. When I run the container with docker-compose up, I can see the mapped port with docker inspect:
...
"NetworkSettings": {
    "Bridge": "",
    "SandboxID": "83e6933aaf7b09b8ae1238d3dbb71bdd495c14927a5a509b332afc17cda6d854",
    "HairpinMode": false,
    "LinkLocalIPv6Address": "",
    "LinkLocalIPv6PrefixLen": 0,
    "Ports": {
        "8181/tcp": [
            {
                "HostIp": "0.0.0.0",
                "HostPort": "32771"
            }
        ]
    },
...
So I know that the internal port 8181 (where my application runs inside the container) is mapped to host port 32771. I need to pass this information, the host port 32771, to my application. Is it possible to do something like this in the docker-compose file?
version: '3.7'
services:
  sdt-proxy:
    image: myimage
    ports:
      - 32770-32780:8181
    environment:
      - SERVER_PORT=8181
      - MY_CONTAINER_PORT=<the mapped host port, e.g. 32771>
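As far as I know, Compose has no way to inject the host port it assigns at startup into the container's environment, so the MY_CONTAINER_PORT line above is not directly expressible. A workaround sketch (the service name sdt-proxy comes from the compose file above): resolve the mapped port from the host once the container is up, and hand it to the application yourself:

# Show the host binding for container port 8181:
docker port "$(docker-compose ps -q sdt-proxy)" 8181
# -> 0.0.0.0:32771

# Or extract just the host port with an inspect format template:
docker inspect \
  --format '{{ (index (index .NetworkSettings.Ports "8181/tcp") 0).HostPort }}' \
  "$(docker-compose ps -q sdt-proxy)"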
I have a docker-compose file used for production which I'm hoping to integrate with VS Code's dockerized development environment.
./docker-compose.yml
version: "3.6"
services:
django: &django-base
build:
context: .
dockerfile: backend/Dockerfile_local
restart: on-failure
volumes:
- .:/website
depends_on:
- memcached
- postgres
- redis
networks:
- main
ports:
- 8000:8000 # HTTP port
- 3000:3000 # ptvsd debugging port
expose:
- "3000"
env_file:
- variables/django_local.env
...
Note how I'm both publishing (ports) and exposing (expose) port 3000 here. This is a result of me playing around to get what I need working; I'm not sure whether I need one, the other, or both.
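For reference: ports publishes a container port on the host, while expose only advertises it to other containers, and containers sharing a compose network can reach each other's listening ports either way, so the expose entry is most likely redundant here. A minimal sketch of just the relevant part of the django service:

ports:
  - 8000:8000 # published on the host
  - 3000:3000 # published on the host for the debugger
# expose: ["3000"] is not needed; other services on the same
# network can already connect to container port 3000 directly.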
My .devcontainer directory then looks like the following:
.devcontainer/devcontainer.json
{
    "name": "Dev Container",
    "dockerComposeFile": ["../docker-compose.yml", "docker-compose.extend.yml"],
    "service": "dev",
    "workspaceFolder": "/workspace",
    "shutdownAction": "stopCompose",
    "settings": {
        "terminal.integrated.shell.linux": null,
        "python.linting.pylintEnabled": true,
        "python.pythonPath": "/usr/local/bin/python3.8"
    },
    "extensions": [
        "ms-python.python"
    ]
}
.devcontainer/docker-compose.extend.yml
version: '3.6'
services:
  dev:
    build:
      context: .
      dockerfile: ./Dockerfile
    external_links:
      - django
    volumes:
      - .:/workspace:cached
    command: /bin/sh -c "while sleep 1000; do :; done"
The idea is that I want to run VS Code attached to the dev service, and from there run the debugger attached to the django service using the following launch.json configuration:
{
    "name": "WP",
    "type": "python",
    "request": "attach",
    "port": 3000,
    "host": "localhost",
    "pathMappings": [
        {
            "localRoot": "${workspaceFolder}",
            "remoteRoot": "/website"
        }
    ]
},
When I do this, though, VS Code reports connect ECONNREFUSED 127.0.0.1:3000. How can I get the ports mapped so this will work? Is it even possible?
Edit
Why not just attach directly to the django service?
The dev container simply contains Python and Node runtimes for linting and IntelliSense purposes while using VS Code. The idea behind creating a new service devoted specifically to debugging in the dev environment is that ./docker-compose.yml contains more than a few services, and some of the devs on my team like to turn them off sometimes to keep resource consumption low. Creating a container specifically for dev also makes it easier to set up .devcontainer/devcontainer.json to add things like extensions to one container, without needing to add them after attaching to the running "non-dev" container. If this were to work, VS Code would be running within the dev container (see this).
I was able to solve this by changing the host in the launch.json from localhost to host.docker.internal. The resulting launch.json configuration then looks like this:
{
    "name": "WP",
    "type": "python",
    "request": "attach",
    "port": 3000,
    "host": "host.docker.internal",
    "pathMappings": [
        {
            "localRoot": "${workspaceFolder}",
            "remoteRoot": "/web-portal"
        }
    ]
},
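This works because, from inside the dev container, localhost refers to the dev container itself, while host.docker.internal resolves to the host machine, where django's port 3000 is published. Note that host.docker.internal is provided out of the box by Docker Desktop for Mac/Windows; on Linux it is not defined by default, but with Docker Engine 20.10+ you can map it yourself. A sketch of the addition to the dev service:

services:
  dev:
    extra_hosts:
      - "host.docker.internal:host-gateway" # requires Docker Engine 20.10+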
I have a docker-compose networking issue. I created my shared space with containers for Ubuntu, TensorFlow, and RStudio, which do an excellent job of sharing a volume between themselves and the host, but when it comes to using the resources of one container from the terminal of another, I hit a wall. I can't do as little as calling python in the terminal of a container that doesn't have it. My docker-compose.yaml:
# docker-compose.yml
version: '3'
services:
  # ubuntu (16.04)
  ubuntu:
    image: ubuntu_base
    build:
      context: .
      dockerfile: dockerfileBase
    volumes:
      - "/data/data_vol/:/data/data_vol/:Z"
    networks:
      - default
    ports:
      - "8081:8081"
    tty: true
  # tensorflow
  tensorflow:
    image: tensorflow_jupyter
    build:
      context: .
      dockerfile: dockerfileTensorflow
    volumes:
      - "/data/data_vol/:/data/data_vol/:Z"
      - .:/notebooks
    networks:
      - default
    ports:
      - "8888:8888"
    tty: true
  # rstudio
  rstudio:
    image: rstudio1
    build:
      context: .
      dockerfile: dockerfileRstudio1
    volumes:
      - "/data/data_vol/:/data/data_vol/:Z"
    networks:
      - default
    environment:
      - PASSWORD=test
    ports:
      - "8787:8787"
    tty: true
volumes:
  ubuntu:
  tensorflow:
  rstudio:
networks:
  default:
    driver: bridge
I am quite a Docker novice, so I'm not sure about my network settings. That said, docker inspect composetest_default (the default network created for the compose project) shows the containers are connected to the network. It is my understanding that in this kind of situation I should be able to freely call one service from each of the other containers, and vice versa:
"Containers": {
"83065ec7c84de22a1f91242b42d41b293e622528d4ef6819132325fde1d37164": {
"Name": "composetest_ubuntu_1",
"EndpointID": "0dbf6b889eb9f818cfafbe6523f020c862b2040b0162ffbcaebfbdc9395d1aa2",
"MacAddress": "02:42:c0:a8:40:04",
"IPv4Address": "192.168.64.4/20",
"IPv6Address": ""
},
"8a2e44a6d39abd246097cb9e5792a45ca25feee16c7c2e6a64fb1cee436631ff": {
"Name": "composetest_rstudio_1",
"EndpointID": "d7104ac8aaa089d4b679cc2a699ed7ab3592f4f549041fd35e5d2efe0a5d256a",
"MacAddress": "02:42:c0:a8:40:03",
"IPv4Address": "192.168.64.3/20",
"IPv6Address": ""
},
"ea51749aedb1ec28f5ba56139c5e948af90213d914630780a3a2d2ed8ec9c732": {
"Name": "composetest_tensorflow_1",
"EndpointID": "248e7b2f163cff2c1388c1c69196bea93369434d91cdedd67933c970ff160022",
"MacAddress": "02:42:c0:a8:40:02",
"IPv4Address": "192.168.64.2/20",
"IPv6Address": ""
}
Some pre-history: I had tried links: inside the docker-compose file but decided to change to networks: on account of deprecation warnings. Was this the right way to go about it?
Docker version 18.09.1
Docker-compose version 1.17.1
but when it comes down to using the resources of the one container inside the terminal of each other one, I hit a wall. I can't do as little as calling python in the terminal of the container that doesn't have it.
You cannot use Linux programs that are in the bin path of one container from another container, but you can use any service that is designed to communicate over a network from any container in your docker-compose file.
Bin path:
$ echo $PATH
/home/exadra37/bin:/home/exadra37/bin:/home/exadra37/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
So programs in these paths that are not designed to communicate over a network are not usable from other containers; they need to be installed in each container where you need them, like python.
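For example, with the compose file above, services are reachable from one another by service name over the shared network, and if you just need a python shell you can exec into the container that has it. A sketch (assuming curl is installed in the ubuntu image):

# From inside the ubuntu container: reach the Jupyter server that the
# tensorflow service publishes, addressing it by service name.
curl http://tensorflow:8888

# From the host: run python inside the container that actually has it.
docker-compose exec tensorflow python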
I am deploying a compose file onto the UCP via:
docker stack deploy -c docker-compose.yml custom-stack-name
In the end I want to deploy multiple compose files (each compose file describes the setup for a separate microservice) onto one docker network, e.g. appsnetwork:
version: "3"
services:
service1:
image: docker/service1
networks:
- appsnetwork
customservice2:
image: myprivaterepo/imageforcustomservice2
networks:
- appsnetwork
networks:
appsnetwork:
The docker stack deploy command automatically creates a new network with a generated name like this: custom-stack-name_appsnetwork
What are my options?
Try creating the network yourself first:
docker network create --driver=overlay --scope=swarm appsnetwork
After that, mark the network as external in your compose file:
version: "3"
services:
service1:
image: nginx
networks:
- appsnetwork
networks:
appsnetwork:
external: true
After that, run two copies of the stack:
docker stack deploy --compose-file docker-compose.yml stack1
docker stack deploy --compose-file docker-compose.yml stack2
docker inspect for both shows an IP in the same network:
$ docker inspect 369b610110a9
...
"Networks": {
    "appsnetwork": {
        "IPAMConfig": {
            "IPv4Address": "10.0.1.5"
        },
        "Links": null,
        "Aliases": [
            "369b610110a9"
        ],
...

$ docker inspect e8b8cc1a81ed
...
"Networks": {
    "appsnetwork": {
        "IPAMConfig": {
            "IPv4Address": "10.0.1.3"
        },
        "Links": null,
        "Aliases": [
            "e8b8cc1a81ed"
        ],
...
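You can also verify this by inspecting the network itself on a manager node; containers from both stacks should show up in its Containers section (a sketch, output omitted):

# Lists the network and its attached containers on this node;
# tasks from stack1 and stack2 should both appear.
docker network inspect appsnetwork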
Suppose I have an application that listens on port 8888. Other parts of the application want to continue to access it on 8888, but external users need to access it on a port above 50000, e.g. 50888.
What I'd like to do in my docker-compose.yml is:
ports:
  - "8888:8888"
  - "50888:8888"
Will this work?
My other alternative is to add an ambassador in there like this:
blah:
  image: blah:6
  ports:
    - "8888:8888"
  container_name: blah
  networks:
    default: {}
blah_ambassador:
  image: svendowideit/ambassador
  links:
    - blah
  ports:
    - "50888:8888"
  environment:
    - BLAH_PORT_8888_TCP=tcp://blah:8888
  container_name: ops_ambassador
  networks:
    default: {}
My question is: will docker-compose allow mapping one container port to two host ports, or do I need an ambassador?
Some time ago, docker-compose stored port mappings in a dictionary keyed by the internal port, so one value overrode the other.
This was fixed here by using a list. So, currently, docker-compose does allow mapping an internal port to two host ports; maybe you are using an older docker-compose version.
Example:
$ docker-compose -v
docker-compose version 1.8.0, build f3628c7
Docker-compose file content (docker-compose.yml):
backend:
  image: your_image
  ports:
    - 3000:3000
    - 8888:3000
docker inspect command: docker inspect your_container_id
"Ports": {
"3000/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "8888"
},
{
"HostIp": "0.0.0.0",
"HostPort": "3000"
}
]
},
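For what it's worth, newer compose file formats (3.2+) also support a long-form port syntax that makes the double mapping explicit. A sketch of the equivalent mapping:

ports:
  - target: 3000    # container port
    published: 3000 # host port
  - target: 3000
    published: 8888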
My Dockerrun.aws.json looks like this:
{
    "AWSEBDockerrunVersion": 2,
    "volumes": [
        {
            "name": "docker-socket",
            "host": {
                "sourcePath": "/var/run/docker.sock"
            }
        }
    ],
    "containerDefinitions": [
        {
            "name": "nginx",
            "image": "nginx",
            "environment": [
                {
                    "name": "VIRTUAL_HOST",
                    "value": "demo.local"
                }
            ],
            "essential": true,
            "memory": 128
        },
        {
            "name": "nginx-proxy",
            "image": "jwilder/nginx-proxy",
            "essential": true,
            "memory": 128,
            "portMappings": [
                {
                    "hostPort": 80,
                    "containerPort": 80
                }
            ],
            "mountPoints": [
                {
                    "sourceVolume": "docker-socket",
                    "containerPath": "/tmp/docker.sock",
                    "readOnly": true
                }
            ]
        }
    ]
}
Running this locally using "eb local run" results in:

ERROR: you need to share your Docker host socket with a volume at /tmp/docker.sock
Typically you should run your jwilder/nginx-proxy with:
    -v /var/run/docker.sock:/tmp/docker.sock:ro
See the documentation at http://git.io/vZaGJ
If I SSH into my docker machine and run:

docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock jwilder/nginx-proxy

it creates the container and mounts the volumes correctly.
Why is the above Dockerrun.aws.json configuration not mounting the /var/run/docker.sock:/tmp/docker.sock volume correctly?
If I run the same configuration from a docker-compose.yml, it works fine locally. However, I want to deploy this same configuration to Elastic Beanstalk using a Dockerrun.aws.json:
version: '2'
services:
  nginx:
    image: nginx
    container_name: nginx
    cpu_shares: 100
    volumes:
      - /var/www/html
    environment:
      - VIRTUAL_HOST=demo.local
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    cpu_shares: 100
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
My local setup is using:
VirtualBox 5.0.22 r108108
docker-machine version 0.7.0, build a650a40
EB CLI 3.7.7 (Python 2.7.1)
Your Dockerrun.aws.json file works fine in AWS EB for me (I only changed it slightly to use our own container/hostname in place of the 'nginx' container). Is it perhaps just a problem with the 'eb local run' setup?
Since you said you are on a Mac, try upgrading to the new Docker 1.12, which runs Docker natively on macOS, or at least to a newer version of docker-machine: https://docs.docker.com/machine/install-machine/#/installing-machine-directly
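If it does turn out to be the eb local environment, one way to see what actually got mounted is to inspect the running proxy container (a sketch; the container name depends on what eb local generates):

# The socket should appear in Mounts with Destination /tmp/docker.sock.
docker inspect --format '{{ json .Mounts }}' <nginx-proxy-container-id>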