Hi guys, a question about Docker.
I have multiple Docker containers, and I can successfully connect each container to the others on the same network.
But once I set up a proxy connection, my containers can no longer reach each other. What should I do?
Here is my docker-compose.yml file:
version: '3.7'
services:
  membership:
    container_name: membership_service
    build:
      context: ./membership
      target: dev
    ports:
      - 6010:6010
    volumes:
      - ./membership:/app
    network_mode: simas_net
  proxy:
    container_name: nginx_proxy
    build:
      context: ./proxy
      target: dev
    ports:
      - 8000:80
    volumes:
      - ./proxy/config:/etc/nginx/conf.d
    network_mode: simas_net
And here is the proxy config, ~/.docker/config.json:
{
  "proxies": {
    "default": {
      "httpProxy": "http://192.168.49.1:8000/",
      "httpsProxy": "http://192.168.49.1:8000/",
      "noProxy": "localhost,127.0.0.0/8"
    }
  }
}
Thanks for your help!
This is not a big problem. Your ~/.docker/config.json is configured to automatically set three proxy environment variables in every container that starts:
http_proxy
https_proxy
no_proxy
The software inside your containers reacts to those variables, which is expected: that is usually exactly why you configure these settings, so that the containers use the proxy.
In your case, however, your server software picks those settings up and tries to connect to the other container's hostname through the proxy. This fails because the proxy is not inside the container network, so it cannot help establish the connection.
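You can verify that these variables are being injected by checking a running container's environment; for example, with the container name from the compose file above:

docker exec membership_service env | grep -i proxy
# should print http_proxy / https_proxy / no_proxy entries
# with the values from ~/.docker/config.json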
Multiple solutions are possible:
- Configure the system that can't connect to bypass the proxy and connect directly. You might want to limit this to the service (container name) it currently can't reach: manually add that container's name to the no_proxy value and hope that the software interprets it correctly (no_proxy is not standardized).
- Remove the general proxy configuration and explicitly set the environment variables (http_proxy, https_proxy, no_proxy) in your compose file, for the services that need the proxy; see the sketch below.
- Add the container name of the target service to the noProxy setting in ~/.docker/config.json. This is not recommended, because it needs a daemon restart every time you run into this problem with a new container; it might be okay if the containers in your setup will not change.
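For example, a minimal sketch of the second option applied to the compose file above (the proxy address comes from your config.json; which names belong in no_proxy depends on what your software resolves, so the container name listed here is an assumption):

services:
  membership:
    # ...build/ports/volumes/network_mode unchanged...
    environment:
      http_proxy: http://192.168.49.1:8000/
      https_proxy: http://192.168.49.1:8000/
      # exempt local addresses and the other container on simas_net by name
      no_proxy: localhost,127.0.0.0/8,nginx_proxy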
Related
I'm trying to run two WebRTC examples (using mediasoup) in Docker. I want to run two servers, as I am working on video calling across a set of instances.
My error (have you seen this before?):
createProducerTransport null Error: port bind failed due to address not available [transport:udp, ip:'172.17.0.1', port:50517, attempt:1/50000]
I think it has something to do with the Docker network settings?
docker-compose.yml
version: "3"
services:
db:
image: mysql
restart: always
app:
image: app
build: .
ports:
- "1440:443"
- "2000-2020"
- "80:8080"
depends_on:
- db
app2:
image: app
build: .
ports:
- "1441:443"
- "2000-2020"
- "81:8080"
depends_on:
- db
Dockerfile
FROM node:12
WORKDIR /app
COPY . .
CMD npm start
It says it couldn't bind the address, so either the IP or the port is causing the problem.
The IP looks like the IP of the Docker instance. Although the Docker instances run on two different machines, it should be the IP of the server, not of the Docker instance (in the mediasoup settings).
There is also a range of RTP/RTCP ports that has to be opened in the Docker instance. These are normally set in the mediasoup config file as well; usually a range of a few hundred ports needs to be opened.
You should set your RTC min and max ports to 2000 and 2020 for testing purposes. Also, I'm guessing you are not forwarding these ports: in docker-compose, use 2000-2020:2000-2020; see the sketch below. Also make sure to set your listenIps properly.
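A minimal sketch of the compose change for one of the services (the /udp suffix is an assumption based on transport:udp in the error message; note that app and app2 cannot both publish the same host range, so the second service would need a different range or host networking):

services:
  app:
    image: app
    build: .
    ports:
      - "1440:443"
      - "2000-2020:2000-2020/udp"  # publish the RTC port range host:container
      - "80:8080"
    depends_on:
      - db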
If you are running mediasoup in Docker, the container where mediasoup is installed should be run in host network mode.
This is explained here:
How to use host network for docker compose?
and in the official docs:
https://docs.docker.com/network/host/
You should also pay attention to the mediasoup configuration settings webRtcTransport.listenIps and plainRtpTransport.listenIp: they tell the client on which IP address your mediasoup server is listening.
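A minimal sketch of that host-networking setup for the app service above (with network_mode: host the container shares the host's network stack, so ports: mappings are neither needed nor allowed):

services:
  app:
    image: app
    build: .
    network_mode: host  # mediasoup binds directly to the host's interfaces
    depends_on:
      - db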
I am trying to start containers with a docker-compose.yml. There are two services: one is mongo and the other is the OHIF viewer.
Currently I am able to access mongo locally (localhost:27017, after port-forwarding) on my desktop, whereas the OHIF viewer isn't reachable (its ports are not visible/empty, so I am not able to access it locally). Can you guide me on how to set them?
As you can see from my docker-compose file, I have set network_mode: "host" to be able to access the services locally on my desktop as well.
Based on my json file, I thought the port was already set (pacsIP:8042), but it's missing when I execute the docker ps command. Can you guide me on this? I am new to Docker and your inputs will definitely be helpful. pacsIP is the IP of my Docker host (a remote Linux server). I would like to port-forward the services and view them on my desktop.
Please find below the docker-compose.yml file
version: '3.6'
services:
  mongo:
    image: "mongo:latest"
    container_name: ohif-mongo
    ports:
      - "27017:27017"
  viewer:
    image: ohif/viewer:latest
    container_name: ohif-viewer
    ports:
      - "3030:80"
      - "8042:8042" # Not sure whether this is correct. I tried with and without this as well, but it didn't work
    network_mode: "host"
    environment:
      - MONGO_URL=mongodb://mongo:27017/ohif
    extra_hosts:
      - "pacsIP:172.xx.xxx.xxx"
    volumes:
      - ./dockersupport-app.json:/app/app.json
As you can see in volumes, I am using a dockersupport-app.json file, which is given below:
{
  "apps": [{
    "name": "ohif-viewer",
    "script": "main.js",
    "watch": true,
    "merge_logs": true,
    "cwd": "/app/bundle/",
    "env": {
      "METEOR_SETTINGS": {
        "servers": {
          "dicomWeb": [
            {
              "name": "Orthanc",
              "wadoUriRoot": "http://pacsIP:8042/wado",   # these ports
              "qidoRoot": "http://pacsIP:8042/dicom-web", # these ports
              "wadoRoot": "http://pacsIP:8042/dicom-web", # these ports
              "qidoSupportsIncludeField": false,
              "imageRendering": "wadouri",
              "thumbnailRendering": "wadouri",
              "requestOptions": {
                "auth": "orthanc:orthanc",
                "logRequests": true,
                "logResponses": false,
How can I access the OHIF viewer locally? What changes should I make to the docker-compose.yml or the json file? I tried with and without port 8042 under the ports section of the compose file, but it still didn't work.
Did you use docker-compose run or docker-compose up?
According to the Docker documentation: "The docker-compose run command does not create any of the ports specified in the service configuration."
Try using the docker-compose up command.
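To illustrate the difference with the service from the compose file above (--service-ports is the documented flag that makes run publish the mapped ports):

docker-compose up -d viewer                  # publishes the ports: mappings
docker-compose run viewer                    # does NOT publish the ports: mappings
docker-compose run --service-ports viewer    # run, but with the ports published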
If you use network_mode: host, it bypasses all of Docker's standard networking. That includes the port mappings: since the container uses the host's network interface directly, there is nothing to map, per se.
network_mode: host is almost never necessary, and I'd remove it here, as in the sketch below. That should make the ports visible in the docker ps output again and make the remapped port 3030 accessible. As it stands, you can probably reach your service directly on the host network on port 80, which is presumably the port the service binds to.
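A sketch of the viewer service with host networking removed (names and ports taken from the compose file above; the 8042 mapping is dropped because that port belongs to the PACS reached via the pacsIP extra host, not to this container):

services:
  viewer:
    image: ohif/viewer:latest
    container_name: ohif-viewer
    ports:
      - "3030:80"  # viewer UI, reachable at http://<docker-host>:3030
    environment:
      - MONGO_URL=mongodb://mongo:27017/ohif
    extra_hosts:
      - "pacsIP:172.xx.xxx.xxx"
    volumes:
      - ./dockersupport-app.json:/app/app.json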
I am having trouble getting extensions installed in the dev container using "Remote - Containers". I do not know if it's a bug, an incorrect configuration on my end, or intended behaviour. Below is my current configuration; both files are located in the root folder of the project.
docker-compose.yml
version: "3.7"
services:
api:
image: node:12
restart: always
ports:
- ${API_PORT}:3000
volumes:
- .:/usr/app
- /usr/app/node_modules
working_dir: /usr/app
command: bash -c "yarn && yarn dev"
.devcontainer.json
{
  "dockerComposeFile": "docker-compose.yml",
  "service": "api",
  "workspaceFolder": "/usr/app",
  "extensions": [
    "eamodio.gitlens",
    "formulahendry.auto-rename-tag",
    "coenraads.bracket-pair-colorizer-2",
    "kumar-harsh.graphql-for-vscode",
    "esbenp.prettier-vscode",
    "ms-vscode.vscode-typescript-tslint-plugin",
    "visualstudioexptteam.vscodeintellicode"
  ]
}
The extensions listed in the .devcontainer.json are the ones I want to have installed in the dev container. Any help is appreciated!
According to the Visual Studio Code documentation, the two files need to be located in a .devcontainer directory in the workspace root; see the layout sketch below.
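Assuming the project structure from the question, the layout would look like this (the file is renamed from .devcontainer.json to devcontainer.json once it lives in the folder):

project-root/
  .devcontainer/
    devcontainer.json
    docker-compose.yml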
I still had issues installing the extensions while working behind a corporate proxy. The solution was to give the container access to the proxy server.
If you use an authenticating proxy like Cntlm on your local machine, configure it to listen on 172.17.0.1 (the Docker bridge interface). Then define the http_proxy and https_proxy environment variables for the container. For example, in devcontainer.json:
"containerEnv": {
"http_proxy": "http://172.17.0.1:3128",
"https_proxy": "http://172.17.0.1:3128"
}
Or in docker-compose.yaml
services:
  devcontainer:
    environment:
      http_proxy: http://172.17.0.1:3128
      https_proxy: http://172.17.0.1:3128
Or configure docker-compose.yaml to make the container use the host network:
services:
  devcontainer:
    network_mode: host
Then you can just pass the same proxy variables into the container as used on the host. For example, in the docker-compose.yaml:
services:
  devcontainer:
    environment:
      http_proxy: $http_proxy
      https_proxy: $https_proxy
If you are not using a local proxy but rather a remote proxy inside your network, you can use the latter approach regardless of the container's network configuration (host or default).
This answer is only applicable if your network environment requires the use of a proxy server.
According to "Known limitations" of "Developing inside a Container"...
Local proxy settings are not reused inside the container, which can prevent extensions from working unless the appropriate proxy information is configured (for example global HTTP_PROXY or HTTPS_PROXY environment variables with the appropriate proxy information).
I was able to set the proxy environment variables by appending to runArgs in devcontainer.json: "--net=host", "--env=https_proxy=(your_proxy_host:port)".
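A sketch of what that looks like in devcontainer.json (the proxy placeholder is kept from above; replace it with your own host:port):

"runArgs": [
  "--net=host",
  "--env=https_proxy=(your_proxy_host:port)"
]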
Alternatively, if host access is not required for the proxy, you can just append to .devcontainer/Dockerfile:
ENV https_proxy=(your_proxy_host:port)
Enabling proxy access in this way may also be necessary for network access from your application (not just for VS Code extensions).
See also: How do I pass environment variables to Docker containers?
I also had issues installing the extensions while working behind a corporate proxy. The solution was to give the container access to the proxy server and to disable strict SSL checking for the HTTP proxy in the VS Code settings:
"settings": {
  "http.proxy": "(your_proxy_host:port)",
  "http.proxyStrictSSL": false
},
I use docker-compose to describe the deployment of one of my applications. The application is composed of:
- a mongodb database,
- a nodejs application,
- an nginx front end serving the static files of the nodejs app.
If I scale the nodejs application, I would like nginx to automatically pick up all the nodejs instances.
Until recently I used the following snippet:
https://gist.github.com/cmoore4/4659db35ec9432a70bca
It is based on the fact that certain environment variables are created on link and change when new servers appear. But with version 2 of the docker-compose file format and the new link system, those environment variables no longer exist.
How can my nginx now detect the scaling of my application?
version: '2'
services:
  nodejs:
    build:
      context: ./
      dockerfile: Dockerfile.nodejs
    image: docker.shadoware.org/passprotect-server:1.0.0
    expose:
      - 3000
    links:
      - mongodb
    environment:
      - MONGODB_HOST=mongodb://mongodb:27017/passprotect
      - NODE_ENV=production
      - DEBUG=App:*
  nginx:
    image: docker.shadoware.org/nginx:1.2
    links:
      - nodejs
    environment:
      - APPLICATION_HOST=nodejs
      - APPLICATION_PORT=3000
  mongodb:
    image: docker.shadoware.org/database/mongodb:3.2.7
The documentation states that:
Containers for the linked service will be reachable at a hostname identical to the alias, or the service name if no alias was specified.
So I believe you could just reference your service names in the nginx conf file, like:
upstream myservice {
  server yourservice1;
  server yourservice2;
}
as they are exported as host entries in /etc/hosts of each container.
But if you really want that host:port information as environment variables, you could write a script that parses the docker-compose.yml and defines an .env file (see the sketch below), or do it manually.
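A hypothetical sketch of such a script, assuming the yq YAML processor is available (the key paths match the compose file above):

# derive nginx's environment from the compose file and write an .env file
echo "APPLICATION_HOST=nodejs" > .env
echo "APPLICATION_PORT=$(yq '.services.nodejs.expose[0]' docker-compose.yml)" >> .env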
UPDATE:
You can get the port information from outside the container; this will print the ports:
docker inspect --format='{{range $p, $conf := .NetworkSettings.Ports}} {{$p}} -> {{(index $conf 0).HostPort}} {{end}}' your_container_id
But if you want to do it from inside a container, then what you want is a service discovery system like ZooKeeper.
There's a long feature request thread in Docker's repo about this.
One workaround solution there caught my attention. You could try building your own nginx image based on that.
I'm using Docker beta on a Mac and have some services set up in service-a/docker-compose.yml:
version: '2'
services:
  service-a:
    # ...
    ports:
      - '4000:80'
I then set up the following in /etc/hosts:
::1 service-a.here
127.0.0.1 service-a.here
and I've got an nginx server running that proxies service-a.here to localhost:4000.
So on my Mac I can just run: curl http://service-a.here. This all works nicely.
Now, I'm building another service, service-b/docker-compose.yml:
version: '2'
services:
  service-b:
    # ...
    ports:
      - '4001:80'
    environment:
      SERVICE_A_URL: service-a.here
service-b needs service-a for a couple of things:
1. It needs to redirect the user's browser to $SERVICE_A_URL.
2. It needs to perform HTTP requests to service-a, also using $SERVICE_A_URL.
With this setup, only the redirect (1) works. The HTTP requests (2) do not work, because the service-b container has no notion of service-a.here in its DNS.
I tried adding service-a.here using the extra_hosts configuration variable, but I'm not sure what to set it to; localhost will of course not work.
Note that I really want to keep the docker-compose files separate (joining them would not fix my problem, by the way), because they both already have a lot of services running inside them.
Is there a way to access the DNS resolution of localhost from inside a Docker container, so that, for instance, curl service-a.here will work from inside a container?
You can use the links instruction in your docker-compose.yml file to automatically resolve the address from your service-b container:
service-b:
  image: blabla
  links:
    - service-a:service-a
service-a:
  image: blablabla
You will then have a line in the /etc/hosts of service-b saying:
172.17.0.X service-a
Also note that service-a will be created before service-b when composing your app. I'm not sure how you can specify a particular IP after that, but Docker's documentation is pretty well done. Hope that's what you were looking for.
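If you want to verify the entry, you could check the hosts file from inside the container:

docker-compose exec service-b cat /etc/hosts  # should contain: 172.17.0.X  service-a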