I have specified a port mapping in docker-compose, but it is still not working; I still have to access the site using the port number specified in EXPOSE.
Below is my docker-compose.yml:
version: '2'
networks:
  default:
    external:
      name: nat
services:
  website:
    build:
      context: '.'
      dockerfile: "./iis.dockerfile"
    ports:
      - 3000:8081
and the corresponding Dockerfile:
FROM microsoft/iis
RUN ["powershell.exe", "Install-WindowsFeature NET-Framework-45-ASPNET"]
RUN ["powershell.exe", "Install-WindowsFeature Web-Asp-Net45"]
ADD www/ c:\\webapp
EXPOSE 8081
RUN powershell New-Website -Name 'web-app' -Port 8081 -PhysicalPath 'c:\webapp' -ApplicationPool '.NET v4.5'
I have to access the app at http://192.168.105.33:8081/; if I try port 3000, it does not work.
Is there anything missing in my configuration above? OS: Windows Server 2016, using Windows containers with Hyper-V and docker-compose up -d to get the containers up and running.
docker inspect gives me the output below:
"Ports": {
"8081/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "3000"
}
]
},
docker ps gives me the output below:
eb7aa1e74b7f   website_website   "C:\\ServiceMonitor..."   14 minutes ago   Up 14 minutes   0.0.0.0:3000->8081/tcp   website_website_1
docker-compose ps gives me the output below:
Name Command State Ports
--------------------------------------------------------------------------------
website_website_1 C:\ServiceMonitor.exe w3svc Up 0.0.0.0:3000->8081/tcp
So I should be able to access it on host using port 3000.
Maybe you're launching your container with docker-compose run? There is a difference in port mapping between docker-compose run and docker-compose up [-d] [service]: with docker-compose run, the port configuration is ignored by design.
You can use the --service-ports flag, or map the ports manually with the -p flag.
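For example (a sketch, using the website service from the compose file above):
# docker-compose run ignores the ports: section by design;
# --service-ports restores the mappings declared in the compose file:
docker-compose run --service-ports website
# or map the port manually for this one-off run:
docker-compose run -p 3000:8081 website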
I am trying to run a local mosquitto broker, publisher, and subscriber setup via Docker and docker-compose, but the publisher cannot connect to the broker. However, connecting to the local broker via the CLI works fine.
I get the following error when running the setup below.
{ Error: connect ECONNREFUSED 127.0.0.1:1883
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1088:14)
  errno: 'ECONNREFUSED',
  code: 'ECONNREFUSED',
  syscall: 'connect',
  address: '127.0.0.1',
  port: 1883 }
Local dockerized setup:
docker-compose.yml:
version: "3.5"
services:
publisher:
hostname: publisher
container_name: publisher
build:
context: ./
dockerfile: dev.Dockerfile
command: npm start
networks:
- default
depends_on:
- broker
broker:
image: eclipse-mosquitto
hostname: mosquitto-broker
container_name: mosquitto-broker
networks:
- default
ports:
- "1883:1883"
networks:
default:
dev.Dockerfile:
FROM node:11-alpine
RUN mkdir app
WORKDIR app
COPY package*.json ./
RUN npm ci
COPY ./src ./src
CMD npm start
src/index.js:
const mqtt = require("mqtt");
const client = mqtt.connect("mqtt://localhost:1883");

client.on("connect", () => {
  console.log("Start publishing...");
  client.publish("testTopic", "test");
});

client.on("error", (error) => {
  console.error(error);
});
However, if I connect to the mosquitto broker via the MQTT.js CLI, it works as expected, e.g.
mqtt sub -t 'testTopic' -h 'localhost' and mqtt pub -t 'testTopic' -h 'localhost' -m 'from MQTT.js'.
What am I missing?
Your publisher and broker are running in two different containers, which means they are effectively two different machines, each with its own IP.
You can't reach the broker service from your publisher container using localhost:1883, and vice versa from the broker to the publisher container.
To reach the broker container, you have to use the container's IP, the container name, or the service name.
In your case, change mqtt.connect("mqtt://localhost:1883"); to mqtt.connect("mqtt://broker:1883"); and give it a try.
The publisher and broker run in different containers, meaning they have different IPs.
When the publisher tries to reach the broker at localhost:1883, it is normal to receive ECONNREFUSED, since the broker is not in the same container.
You should replace 127.0.0.1 or localhost with the service name of the broker (broker in this case). The service name will be resolved to the correct IP of the broker container.
In your index.js you should change "localhost" to "broker". Inside a container, "localhost" resolves to that specific container, so you should always use the service name instead; Docker will take care of routing to that service. Also, by default all services in the same compose file are added to the same network, so there is no need to specify it.
So basically change this: const client = mqtt.connect("mqtt://localhost:1883");
To this: const client = mqtt.connect("mqtt://broker:1883");
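If you want the same code to work both inside Compose and when running locally, a small variant (a sketch; BROKER_HOST is a hypothetical variable name, not from the question) is to read the broker host from the environment:
const mqtt = require("mqtt");

// Default to the Compose service name; set BROKER_HOST=localhost when
// running outside Docker (BROKER_HOST is a hypothetical name).
const host = process.env.BROKER_HOST || "broker";
const client = mqtt.connect(`mqtt://${host}:1883`);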
I am trying to inspect a Deno app that is run inside a Docker container with docker-compose.
The docker-compose configuration is as follows:
services:
  api_bo:
    image: denoland/deno:debian-1.23.4
    volumes:
      - type: bind
        source: .
        target: /usr/src
    ports:
      - 9229:9229
      - 6005:3000
    command: bash -c "cd /usr/src/packages/api_bo && deno task inspect"
    depends_on:
      - mongo_db
    environment:
      - MONGO_URL=mongodb://mongo_db:27017/academy_db
      - DB_NAME=academy_db
      - PORT=3000
deno.json is as follows:
{
  "compilerOptions": {
    "allowJs": false,
    "strict": true
  },
  "lint": {
    "files": {
      "include": ["src/"],
      "exclude": ["src/types.ts"]
    },
    "rules": {
      "tags": ["recommended"],
      "include": [
        "ban-untagged-todo",
        "no-explicit-any",
        "no-implicit-any",
        "explicit-function-return-type"
      ],
      "exclude": ["no-debugger", "no-console"]
    }
  },
  "tasks": {
    "start": "deno run -A --watch src/app.ts",
    "inspect": "deno run -A --inspect src/app.ts"
  },
  "importMap": "../../import_map.json"
}
Chrome with chrome://inspect does not detect the running process.
When running outside of Docker with deno run, it works just fine.
It seems that the Deno inspector only listens for connections on localhost, and thus it cannot be reached through Docker's port forwarding.
Deno and Node.js share the same V8 Inspector Protocol; for more, see V8 / Docs / Inspector.
Luckily, they also share the same parameters, "--inspect=[HOST:PORT]", "--inspect-brk=[HOST:PORT]", and so on;
for more, see NodeJS / API / Inspect or NodeJS / API / Inspect Brk (THIS is the documentation from Node.js, so be careful!).
The main "problem" (for security reasons) is that the Node.js and Deno inspectors listen only on localhost / 127.0.0.1, and Docker can't and won't forward those ports. But with the "--inspect" parameter you can change the host and port.
Deno CLI
deno run --inspect=0.0.0.0:9229 ./src/my-big-cool-file.ts
Dockerfile
# ...
EXPOSE 9229
# ...
CMD ["run", "--inspect=0.0.0.0:9229", "...", "./src/main.ts"]
TL;DR
your deno.json
// ...
"tasks": {
  "start": "deno run -A --watch src/app.ts",
  "inspect": "deno run -A --inspect=0.0.0.0:9229 src/app.ts"
},
// ...
your docker-compose.yml
services:
  # ...
  api_bo:
    # ...
    ports:
      # THIS IS IMPORTANT, FORWARD DEBUG PROTOCOLS ONLY TO YOUR LOCALHOST! **1
      - 127.0.0.1:9229:9229
**1: except when you need it and you know what you are doing!
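To check that the inspector is actually reachable from the host, you can query its discovery endpoint (a sketch, assuming Deno serves the same /json/list HTTP endpoint as Node's inspector):
# should return a JSON array describing the debug target
curl http://127.0.0.1:9229/json/list
# if chrome://inspect doesn't pick it up automatically, add
# localhost:9229 under "Configure..."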
I have this docker-compose.yml that basically builds my project for e2e tests. It's composed of a Postgres db, a backend Node app, a frontend Node app, and a spec app which runs the e2e tests using Cypress.
version: '3'
services:
  database:
    image: 'postgres'
  backend:
    build: ./backend
    command: /bin/bash -c "sleep 3; yarn backpack dev"
    depends_on:
      - database
  frontend:
    build: ./frontend
    command: /bin/bash -c "sleep 15; yarn nuxt"
    depends_on:
      - backend
  spec:
    build:
      context: ./frontend
      dockerfile: Dockerfile.e2e
    command: /bin/bash -c "sleep 30; yarn cypress run"
    depends_on:
      - frontend
      - backend
The Dockerfiles are just simple Dockerfiles based on node:8 which copy the project files and run yarn install. In the spec Dockerfile, I pass http://frontend:3000 as FRONTEND_URL.
But this setup fails at the spec command, where my Cypress runner can't connect to frontend with this error:
spec_1 | > Error: connect ECONNREFUSED 172.20.0.4:3000
As you can see, it resolves the hostname frontend to the IP correctly, but it's not able to connect. I'm scratching my head over why I can't connect to the frontend using the service name. If I switch the command on spec to sleep 30; ping frontend, it successfully pings the container. I've tried deleting the network and letting docker-compose recreate it, and I've tried specifying expose and links on the respective services, all to no avail.
I've set up a sample repo here if you wanna try replicating the issue:
https://github.com/afifsohaili/demo-dockercompose-network
Any help is greatly appreciated! Thank you!
Your application is listening on loopback:
$ docker run --rm --net container:demo-dockercompose-network_frontend_1 nicolaka/netshoot ss -lnt
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 127.0.0.11:35233 *:*
LISTEN 0 128 127.0.0.1:3000 *:*
From outside of the container, you cannot connect to ports that are only listening on loopback (127.0.0.1). You need to reconfigure your application to listen on all interfaces (0.0.0.0).
For your app, in the package.json, you can add (according to the nuxt faq):
"config": {
"nuxt": {
"host": "0.0.0.0",
"port": "3000"
}
},
Then you should see:
$ docker run --rm --net container:demo-dockercompose-network_frontend_1 nicolaka/netshoot ss -lnt
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:3000 *:*
LISTEN 0 128 127.0.0.11:39195 *:*
And instead of an unreachable error, you'll now get a 500:
...
frontend_1 | response: undefined,
frontend_1 | statusCode: 500,
frontend_1 | name: 'NuxtServerError' }
...
spec_1 | The response we received from your web server was:
spec_1 |
spec_1 | > 500: Server Error
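Alternatively, instead of editing package.json, Nuxt also reads the HOST and PORT environment variables (as far as I know), so a sketch of the same fix on the compose side would be:
frontend:
  build: ./frontend
  command: /bin/bash -c "sleep 15; yarn nuxt"
  environment:
    - HOST=0.0.0.0   # listen on all interfaces instead of loopback
    - PORT=3000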
First of all, thank you for reading my question and trying to help me, and apologies for my English.
I'm trying to run a Docker setup with an Express server project and Mongo. The Dockerfile builds perfectly, but when I run:
docker logs -f name
it shows the following error:
/usr/local/bin/docker-entrypoint.sh line 340: exec: npm not found
Also, I think that MongoDB doesn't run.
Why isn't npm recognized, given that I added Node?
How can I run npm and launch Mongo?
I'm using Ubuntu 17.10.
Here is my Dockerfile:
# Install Node
FROM node:latest
# Install Mongo
FROM mongo:4.0.0-xenial
# Author
MAINTAINER MachineGun
# Create user Ubuntu 17.10 (64 bits)
RUN adduser --disabled-login dockeruser
# Work Directory
WORKDIR /home/expressserver
# Copy express project to docker
COPY expressserver expressserver
# Defaul user
USER dockeruser
# Config cointainer PORT:
# MongoDB listening on port: 27017
# Server listening on port: 8080
EXPOSE 27017 8080
# Exec MongoDB
CMD [mongo]
# Exec server with custom npm start
CMD ["npm", "run", "start:dev"]
Also, in app.js I use Mongoose with the following URI: mongodb://localhost:27017/ExpressServer. Is that OK?
mongoose.connection.openUri('mongodb://localhost:27017/ExpressServer', { useNewUrlParser: true }, (err, res) => {
  if (err) {
    console.log('Error: Database not running on port 27017: \x1b[31m%s\x1b[0m', 'offline');
    // console.log('throw err: ', err);
    throw err;
  }
  console.log('Database running on port 27017: \x1b[32m%s\x1b[0m', 'online');
});
I've solved the problem :D
Thanks a lot :D
The root cause was the two FROM lines: only the last FROM produces the final image, so the mongo image's filesystem won, which is why npm was not found (and only the last CMD runs, so mongo was never started either). In the end I used Docker Compose, with one container per process.
Here is my docker-compose.yml:
version: '2'
services:
  server:
    container_name: expressserver
    build: ./
    ports:
      - "8080:8080"
    links:
      - mongo
  mongo:
    container_name: mongoDB
    image: mongo:latest
    volumes:
      - /var/lib/mongodb:/data/db
    ports:
      - "27017:27017"
    command: mongod --port 27017
Here is my Dockerfile:
FROM node:carbon
MAINTAINER MachineGun
RUN adduser --disabled-login dockeruser
WORKDIR /home/dockeruser
COPY expressserver expressserver
WORKDIR /home/dockeruser/expressserver
RUN npm install
USER dockeruser
EXPOSE 8080 27017
CMD ["npm", "start"]
And in my app.js, to connect with Mongoose:
const options = {
  autoIndex: false, // Don't build indexes
  reconnectTries: 30, // Retry up to 30 times
  reconnectInterval: 500, // Reconnect every 500ms
  poolSize: 10, // Maintain up to 10 socket connections
  // If not connected, return errors immediately rather than waiting for reconnect
  bufferMaxEntries: 0
};

// Database connection
const connectWithRetry = () => {
  mongoose.connect("mongodb://mongo/expressdb", options)
    .then(() => {
      console.log('Database running on port 27017: \x1b[32m%s\x1b[0m', 'online');
    })
    .catch((err) => {
      console.log('MongoDB connection unsuccessful on port 27017: \x1b[31m%s\x1b[0m', 'offline');
      console.log('Retry after 5 seconds.');
      setTimeout(connectWithRetry, 5000);
    });
};

connectWithRetry();
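As a possible alternative to retrying in application code: Compose file format 2.1+ lets depends_on wait on a healthcheck. A sketch (the mongo --eval ping test is an assumption; adjust it to your image):
version: '2.1'
services:
  server:
    build: ./
    ports:
      - "8080:8080"
    depends_on:
      mongo:
        condition: service_healthy
  mongo:
    image: mongo:latest
    healthcheck:
      test: ["CMD", "mongo", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5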
I want to create a docker-compose file that can run on different servers.
For that, I have to be able to specify the host IP or hostname of the server (where all the containers are running) in several places in the docker-compose.yml,
e.g. for a consul container, where I want to define how the server can be found by fellow consul containers.
consul:
  image: progrium/consul
  command: -server -advertise 192.168.1.125 -bootstrap
I don't want to hardcode 192.168.1.125 obviously.
I could use env_file: to specify the hostname or IP and adapt it on every server, so I'd have that information in one place and use it in docker-compose.yml. But env_file can only be used to set environment variables, not the advertise parameter.
Is there a better solution?
docker-compose allows you to use environment variables from the environment running the compose command.
See documentation at https://docs.docker.com/compose/compose-file/#variable-substitution
Assuming you can create a wrapper script, as @balver suggested, you can set an environment variable called EXTERNAL_IP that will hold the value of $(docker-machine ip).
Example:
#!/bin/sh
export EXTERNAL_IP=$(docker-machine ip)
exec docker-compose "$@"
and
# docker-compose.yml
version: "2"
services:
  consul:
    image: consul
    environment:
      - "EXTERNAL_IP=${EXTERNAL_IP}"
    command: agent -server -advertise ${EXTERNAL_IP} -bootstrap
Unfortunately, if you are using random port assignment there is no way to add an EXTERNAL_PORT, so the ports must be mapped statically.
PS: Something very similar is enabled by default in HashiCorp Nomad, which also includes mapped ports. Doc: https://www.nomadproject.io/docs/jobspec/interpreted.html#interpreted_env_vars
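A usage sketch for the wrapper above (assuming it is saved as ./dc.sh, a hypothetical name, and made executable):
chmod +x dc.sh
./dc.sh up -d    # EXTERNAL_IP is exported before docker-compose runs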
I've used the Docker internal network IP, which seems to be static: 172.17.0.1.
Is there a better solution?
Absolutely! You don't need the host IP at all for communication between containers. If you link containers in your docker-compose.yaml file, you will have access to a number of environment variables that you can use to discover the IP addresses of your services.
Consider, for example, a docker-compose configuration with two containers: one using consul, and one running some service that needs to talk to consul.
consul:
  image: progrium/consul
  command: -server -bootstrap
webserver:
  image: larsks/mini-httpd
  links:
    - consul
First, by starting consul with just -server -bootstrap, consul figures out its own advertise address, for example:
consul_1 | ==> Consul agent running!
consul_1 | Node name: 'f39ba7ef38ef'
consul_1 | Datacenter: 'dc1'
consul_1 | Server: true (bootstrap: true)
consul_1 | Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 53, RPC: 8400)
consul_1 | Cluster Addr: 172.17.0.4 (LAN: 8301, WAN: 8302)
consul_1 | Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
consul_1 | Atlas: <disabled>
In the webserver container, we find the following environment variables available to pid 1:
CONSUL_PORT=udp://172.17.0.4:53
CONSUL_PORT_8300_TCP_START=tcp://172.17.0.4:8300
CONSUL_PORT_8300_TCP_ADDR=172.17.0.4
CONSUL_PORT_8300_TCP_PROTO=tcp
CONSUL_PORT_8300_TCP_PORT_START=8300
CONSUL_PORT_8300_UDP_END=udp://172.17.0.4:8302
CONSUL_PORT_8300_UDP_PORT_END=8302
CONSUL_PORT_53_UDP=udp://172.17.0.4:53
CONSUL_PORT_53_UDP_ADDR=172.17.0.4
CONSUL_PORT_53_UDP_PORT=53
CONSUL_PORT_53_UDP_PROTO=udp
CONSUL_PORT_8300_TCP=tcp://172.17.0.4:8300
CONSUL_PORT_8300_TCP_PORT=8300
CONSUL_PORT_8301_TCP=tcp://172.17.0.4:8301
CONSUL_PORT_8301_TCP_ADDR=172.17.0.4
CONSUL_PORT_8301_TCP_PORT=8301
CONSUL_PORT_8301_TCP_PROTO=tcp
CONSUL_PORT_8301_UDP=udp://172.17.0.4:8301
CONSUL_PORT_8301_UDP_ADDR=172.17.0.4
CONSUL_PORT_8301_UDP_PORT=8301
CONSUL_PORT_8301_UDP_PROTO=udp
CONSUL_PORT_8302_TCP=tcp://172.17.0.4:8302
CONSUL_PORT_8302_TCP_ADDR=172.17.0.4
CONSUL_PORT_8302_TCP_PORT=8302
CONSUL_PORT_8302_TCP_PROTO=tcp
CONSUL_PORT_8302_UDP=udp://172.17.0.4:8302
CONSUL_PORT_8302_UDP_ADDR=172.17.0.4
CONSUL_PORT_8302_UDP_PORT=8302
CONSUL_PORT_8302_UDP_PROTO=udp
CONSUL_PORT_8400_TCP=tcp://172.17.0.4:8400
CONSUL_PORT_8400_TCP_ADDR=172.17.0.4
CONSUL_PORT_8400_TCP_PORT=8400
CONSUL_PORT_8400_TCP_PROTO=tcp
CONSUL_PORT_8500_TCP=tcp://172.17.0.4:8500
CONSUL_PORT_8500_TCP_ADDR=172.17.0.4
CONSUL_PORT_8500_TCP_PORT=8500
CONSUL_PORT_8500_TCP_PROTO=tcp
There is a set of variables for each port EXPOSEd by the consul image. For example, from the webserver container we could interact with the consul REST API by connecting to:
http://${CONSUL_PORT_8500_TCP_ADDR}:8500/
With the new version of Docker Compose (1.4.0) you should be able to do something like this:
docker-compose.yml
consul:
  image: progrium/consul
  command: -server -advertise HOSTIP -bootstrap
bash
$ sed -e "s/HOSTIP/${HOSTIP}/g" docker-compose.yml | docker-compose --file - up
This is thanks to the new feature:
Compose can now read YAML configuration from standard input, rather than from a file, by specifying - as the filename. This makes it easier to generate configuration dynamically:
$ echo 'redis: {"image": "redis"}' | docker-compose --file - up
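A similar trick works with envsubst from the gettext package (a sketch; it assumes the compose file uses an ${HOSTIP} placeholder instead of the bare HOSTIP token above):
$ export HOSTIP=192.168.1.125
$ envsubst < docker-compose.yml | docker-compose --file - up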
Environment variables, as suggested in the earlier solution, are created by Docker when containers are linked. But the env vars are not automatically updated if the container is restarted. So, it is not recommended to use environment variables in production.
Docker, in addition to creating the environment variables, also updates the host entries in the /etc/hosts file. In fact, the Docker documentation recommends using the host entries from /etc/hosts instead of the environment variables.
Reference: https://docs.docker.com/userguide/dockerlinks/
Unlike host entries in the /etc/hosts file, IP addresses stored in the environment variables are not automatically updated if the source container is restarted. We recommend using the host entries in /etc/hosts to resolve the IP address of linked containers.
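You can see those generated host entries from inside a linked container, e.g. (a sketch; the container name depends on your project):
$ docker exec -it myproject_webserver_1 cat /etc/hosts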
extra_hosts works; it's hard-coded into docker-compose.yml, but for my current static setup that's all I need.
version: '3'
services:
  my_service:
    container_name: my-service
    image: my-service:latest
    extra_hosts:
      - "myhostname:192.168.0.x"
    ...
    networks:
      - host
networks:
  host:
Create a script that sets your host IP in an environment variable on every boot.
sudo vi /etc/profile.d/docker-external-ip.sh
Then copy this code inside:
export EXTERNAL_IP=$(hostname -I | awk '{print $1}')
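To pick the variable up in your current shell without rebooting, source the file first (a usage sketch):
source /etc/profile.d/docker-external-ip.sh
echo "$EXTERNAL_IP"    # should print your host's primary IP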
Now you can use it in your docker-compose.yml file:
version: '3'
services:
  my_service:
    container_name: my-service
    image: my-service:latest
    environment:
      - EXTERNAL_IP=${EXTERNAL_IP}
    extra_hosts:
      - my.external-server.net:${EXTERNAL_IP}
    ...
environment --> sets it as an environment variable inside your Docker container
extra_hosts --> adds these host entries to the container's /etc/hosts file
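To confirm the entry landed in the container, a quick check (a sketch; it assumes the image provides getent):
docker-compose exec my_service getent hosts my.external-server.net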