How to connect a socket with Socket.io in a Docker container?

I have a tiny socket server in a Docker container. The server looks like this:
var app = require('express')();
var server = require('http').Server(app);
var io = require('socket.io')(server, {origins: 'localhost:*'});
io.on('connection', function (socket) {
  console.log('Connected');
});
const PORT = 8081;
const HOST = '0.0.0.0';
server.listen(PORT, HOST);
and the Dockerfile is:
FROM keymetrics/pm2-docker-alpine:latest
WORKDIR /root
RUN apk update && \
    apk upgrade && \
    apk add git
ENV HOME /root
COPY socket.js ./
COPY package.json ./
RUN npm install
COPY pm2.json ./
EXPOSE 8081
CMD [ "pm2-docker", "start", "pm2.json" ]
pm2.json looks like
{
  "apps": [{
    "name": "socket-server",
    "script": "socket.js",
    "exec_mode": "cluster",
    "instances": 2,
    "env": {
      "production": true
    }
  }]
}
package.json
{
  "name": "socket-server",
  "version": "1.0.0",
  "description": "",
  "main": "socket.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "express": "^4.15.3",
    "socket.io": "^2.0.3"
  }
}
It all runs just fine with
docker run -d -p 8081:8081 socket-server
until I try to connect to it from a website running in another container. The website connects like this:
<script src="socket.io.js"></script>
<script>
  var socket = io.connect('http://localhost:8081');
  socket.on('connect', function(data) {
    console.log('Connected Client')
  });
</script>
and in the console, it shows that it polls just fine with
Request URL:http://localhost:8081/socket.io/?
EIO=3&transport=polling&t=LthQCgI&sid=93sOyTiSOe5RVOdEAAAL
Request Method:POST
Status Code:200 OK
but fails to get a socket connection
Request URL:ws://localhost:8081/socket.io/?
EIO=3&transport=websocket&sid=93sOyTiSOe5RVOdEAAAL
Request Method:GET
Status Code:400 Bad Request
If I run the socket server outside the Docker container, it's fine and the socket connects.
I have tried getting the IP of the container that the socket server is running in and using that in the connection script, but even the polling doesn't work when I configure it that way.
I really need this inside a Docker container.
Any help is most appreciated

Although this is an old question, I figured I would elaborate on it a little for anyone else wondering how to connect containers, since the previous answer was a bit slim.
Using swarm in this case would be overkill, especially for something like running the containers locally in a way that lets them talk to each other. Instead, you simply want to put the containers on the same Docker network.
version: '3.5'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    command: app
    ports:
      - "4000:4000"
    volumes:
      - .:/app
    networks:
      - app-network
  pgsql:
    image: postgres:latest
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
volumes:
  postgres_data:
    driver: local
In the docker-compose.yml file example above you can see that I am defining a network via:
networks:
  app-network:
    driver: bridge
Then I assign both the app and pgsql containers to that network via:
networks:
  - app-network
This allows me to access the containers from one another via the container "name". So in my code I am now able to use pgsql:5432 and communicate with the postgres service. The app container is also reachable from the pgsql container via app:4000.
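For example, a minimal sketch of how the app service could pick up the database by service name (the environment variable name and the credentials here are placeholders, not part of the original setup):

services:
  app:
    environment:
      # "pgsql" resolves to the postgres container on app-network
      - DATABASE_URL=postgres://postgres:postgres@pgsql:5432/postgres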
While this can get much more complex than the above example, I figured I'd leave a working docker-compose.yml example above. You can find out more about Docker networks at https://docs.docker.com/compose/networking/

Maybe you should try to make a Docker swarm and let the containers join the same network:
....
version: '3.5'
services:
  myserver:
    image: 'mydocker-image'
    networks:
      - mynetwork
....
and access the server like this: http://myserver:8081
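For completeness, a minimal sketch of what the full file could look like with the network declared at the top level (the website service and image names are made up for illustration; use the overlay driver if you actually deploy to a swarm, or bridge for a plain local docker-compose run):

version: '3.5'
services:
  myserver:
    image: 'mydocker-image'
    networks:
      - mynetwork
  website:
    # hypothetical second service that needs to reach myserver:8081
    image: 'my-website-image'
    networks:
      - mynetwork
networks:
  mynetwork:
    driver: bridge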

Related

POSTMAN in Docker GET error (connect ECONNREFUSED)

I have a UI running on 127.0.0.1:80 through a Docker container. I tested this via Chrome and the Postman Client app, and both work well.
When I export the GET test case from the Postman Client app and run it through a separate container, I receive this error:
GET 127.0.0.1:80 [errored]
connect ECONNREFUSED 127.0.0.1:80
here is my docker compose file:
version: "3.7"
services:
web:
build: ui
ports:
- 80:80
depends_on:
- api
api:
build: app
environment:
- PORT=80
ports:
- 8020:80
test:
container_name: restful_booker_checks
build: test
image: postman_checks
depends_on:
- api
- web
here is my Dockerfile for the test cases:
FROM node:10-alpine3.11
RUN apk update && apk upgrade && \
    apk --no-cache --update add gcc musl-dev ca-certificates curl && \
    rm -rf /var/cache/apk/*
RUN npm install -g newman
COPY ./ /test
WORKDIR /test
CMD ["newman", "run", "OCR_POC_v2_tests.json"]
here is the exported json file from Postman Client app:
{
  "info": {
    "_postman_id": "5e702694-fa82-4b9f-8b4e-49afd11330cc",
    "name": "OCR_POC_v2_tests",
    "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json",
    "_exporter_id": "21096887"
  },
  "item": [
    {
      "name": "get",
      "request": {
        "method": "GET",
        "header": [],
        "url": {
          "raw": "127.0.0.1:80",
          "host": [
            "127",
            "0",
            "0",
            "1"
          ],
          "port": "80"
        }
      },
      "response": []
    }
  ]
}
When I run the following command, it also works well:
newman run OCR_POC_v2_tests.json
So it's only in Docker that the connection cannot get through.
docker-compose creates a bridge network that it connects the containers to. Each container can be addressed by its service name on the network.
You're trying to connect to 127.0.0.1 or localhost, which in a Docker context is the test container itself.
You need to connect to web instead, so your URL becomes http://web:80/. I don't know how exactly you need to change your exported Postman file to accomplish that, but hopefully you can figure it out.
Also note that on the bridge network you connect using the container ports. I.e. in your case, that would be port 80 on both web and api. If you only need to access the containers from other containers on the docker network, you don't need to map the ports.
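One way to adjust the exported collection (untested here, but following the format already shown in the question) is to edit the url object so that the host is the compose service name, for example:

"url": {
  "raw": "http://web:80",
  "protocol": "http",
  "host": [
    "web"
  ],
  "port": "80"
}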

Nodemon in docker doesn't work, also --legacy-watch -L are not working

I have been trying to find a solution for a long time: getting nodemon in Docker to reload when I update e.g. index.js. I'm on Windows 10.
I have a Node project with Docker:
proj/backend/src/index.js:
const express = require('express')
const app = express()

app.get('/', (req, res) => {
  res.send('Hello world.')
})

const port = process.env.PORT || 3001
app.listen(port, () => {
  console.log(`Example app listening at http://localhost:${port}`)
})
proj/backend/package.json:
{
  "scripts": {
    "start": "node ./bin/www",
    "start:legacy": "nodemon --legacy-watch -L --watch src src/index.js"
  },
  "dependencies": {
    "express": "^4.17.2"
  },
  "devDependencies": {
    "nodemon": "^2.0.15"
  }
}
proj/backend/dev.Dockerfile:
FROM node:lts-alpine
RUN npm install --global nodemon
WORKDIR /usr/src/app
COPY . .
RUN npm ci
EXPOSE 3001
ENV DEBUG=playground:*
CMD npm run start:legacy
proj/docker-compose.dev.yml:
version: '3.8'
services:
  backend:
    image: backend-img
    build:
      context: ./backend
      dockerfile: ./dev.Dockerfile
    ports:
      - 3001:3001
    environment:
      - PORT=3001
If I am not wrong, a Docker container is designed to kill itself when its main process ends. When using nodemon (and updating the code), the process is stopped and restarted, so the container will stop. You could make npm start not be the main process, but this is not good practice.
It's probably already late, but I will write this anyway.
There are misconceptions in your configuration.
In the "start:legacy" command you should use only one of the legacy options, --legacy-watch or -L, not both, because they are equivalent. According to the nodemon docs: "Via the CLI, use either --legacy-watch or -L for short."
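So the script line could be trimmed to something like this (a minimal sketch; either flag alone should behave the same):

"start:legacy": "nodemon --legacy-watch --watch src src/index.js"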
Your Dockerfile configuration seems fine. But to synchronize your local machine's files and directories with the Docker container, you should use volumes in docker-compose. So your docker-compose file will look something like this:
version: '3.8'
services:
  backend:
    image: backend-img
    build:
      context: ./backend
      dockerfile: ./dev.Dockerfile
    volumes:
      - ./your_project_dir:/usr/src/app
      - /usr/src/app/node_modules
    ports:
      - 3001:3001
    environment:
      - PORT=3001
I believe that if you define the volumes, you will be able to make changes locally and the container will also see those changes.
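You would then bring the stack up with something like the following (the compose file name comes from the question's layout; --build simply forces the image to be rebuilt first):

docker-compose -f docker-compose.dev.yml up --build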

Connect to Redis Docker container from Vagrant machine

We're making the move from Vagrant to Docker.
Our first aim is to move some services out. In this case I'm trying to host a Redis server in a Docker container and connect to it from my Vagrant machine.
On the Vagrant machine there is an Apache2 web server hosting a Laravel app.
It's the connection part I'm struggling with. Currently I have:
Dockerfile.redis
FROM redis:3.2.12
RUN redis-server
docker-compose.yml (concatenated)
version: '3'
services:
  redis:
    build:
      context: .
      dockerfile: Dockerfile.redis
    working_dir: /opt
    ports:
      - "6379:6379"
I've tried various ways to connect to this:
Attempt 1
Using the host IP 10.0.2.2 in the Laravel config. Results in a "Connection refused".
Attempt 2
Set up a network in the docker-compose file:
redis:
  build:
    context: .
    dockerfile: Dockerfile.redis
  working_dir: /opt
  network:
    - app_net:
        ipv4_address: 172.16.238.10
  ports:
    - "6379:6379"
networks:
  app_net:
    driver: bridge
    ipam:
      driver: default
      - subnet: 172.16.238.0/24
This instead results in timeouts. Most solutions seem to require a gateway configured on the network, but this isn't configurable in docker compose 3. Is there maybe a way around this?
If anyone can give any guidance, that would be great; most guides talk about connecting to Docker containers inside a Vagrant machine rather than from one.
FYI: this is using Docker for Mac and version 3 of docker-compose.
We were able to get this going using purely docker-compose, without having a Dockerfile for Redis at all:
redis:
  image: redis
  container_name: redis
  working_dir: /opt
  ports:
    - "6379:6379"
Once done like this, we were able to connect to Redis from within the Vagrant machine using:
redis-cli -h 10.0.2.2
Or as the following in Laravel (although we're using environment variables to set these):
'redis' => [
    'client' => 'phpredis',
    'default' => [
        'host' => '10.0.2.2',
        'password' => null,
        'port' => 6379,
        'database' => 0,
    ]
]
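If you do drive these from environment variables, a minimal sketch of the corresponding .env entries (key names assumed from Laravel's stock config/database.php) would be:

REDIS_HOST=10.0.2.2
REDIS_PASSWORD=null
REDIS_PORT=6379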
Your Attempt 1 should work actually. When you create a service without defining a network, docker-compose automatically creates a bridge network. For example:
When you run docker-compose up on this:
version: '3'
services:
  redis:
    build:
      context: .
      dockerfile: Dockerfile.redis
    working_dir: /opt
    ports:
      - "6379:6379"
docker-compose creates a bridge network named <project name>_default, which is docker_compose_test_default in my case, as shown below:
me#myshell:~/docker_compose_test $ docker network ls
NETWORK ID          NAME                          DRIVER    SCOPE
6748b1ea4b85        bridge                        bridge    local
4601c6ea30c3        docker_compose_test_default   bridge    local
80033acaa6e4        host                          host      local
When you inspect your container, you can see that an IP has already been assigned to it:
docker inspect e6b196f952af
...
"Networks": {
    "bridge": {
        ...
        "Gateway": "172.18.0.1",
        "IPAddress": "172.18.0.2",
You can then use this IP to connect from the host or your vagrant box:
me#myshell:~/docker_compose_test $ redis-cli -h 172.18.0.2 -p 6379
172.18.0.2:6379> ping
PONG
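As a shortcut, you can pull just the address out of docker inspect with a Go-template format string (the container ID here is the one from the example above):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' e6b196f952af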

Use docker container as SSH tunnel inside network

I am trying to use a Docker container to set up an SSH tunnel to a remote database that is only reachable via SSH. I have a Docker network with several containers and want to make the database available to all the containers in the network.
The Dockerfile for the SSH container looks like this:
FROM debian:stable
RUN apt-get update && apt-get -y --force-yes install openssh-client autossh postgresql-client
COPY .ssh /root/.ssh
RUN chown root:root /root/.ssh/config
EXPOSE 12345
ENTRYPOINT ["/usr/bin/autossh", "-M", "0", "-v", "-T", "-N", "-4", "-L", "12345:localhost:1234", "user@remotedb"]
Inside the .ssh directory are my keys and the config file, which looks like this:
Host remotedb
    StrictHostKeyChecking no
    ServerAliveInterval 30
    ServerAliveCountMax 3
The tunnel itself works on this container, meaning I can access the db from inside it as localhost:12345.
Now I want to access it also from other containers in the same network.
My docker-compose.yml looks like this (I commented out some trials):
version: '2'
networks:
  my_network:
    driver: bridge
    ipam:
      config:
        - subnet: 10.12.0.0/16
          gateway: 10.12.0.1
services:
  service_1:
    image: my/image:alias
    volumes:
      - somevolume
    # links:
    #   - my_ssh
    ports:
      - "8080"
    environment:
      ENV1: blabla
    networks:
      my_network:
        ipv4_address: 10.12.0.12
  my_ssh:
    build:
      context: ./dir_with_Dockerfile
    # ports:
    #   - "23456:12345"
    expose:
      - "12345"
    networks:
      my_network:
        ipv4_address: 10.12.0.13
I've tried to access the remote database from inside service_1 using the hostnames 'my_ssh', the ipv4_address, and 'localhost', with ports 12345 and 23456. None of these combinations have worked. Where could I be going wrong?
Or how else could I achieve a permanent connection from my containers to the remote database?
More of a suggestion than an answer; setting up OpenVPN on your database network and your docker swarm would allow you to connect the two networks together. It would also make it easier for you to configure more hosts in the future.

Could not call another container using its name as host in docker-compose

I have two Docker containers. One is a Java server, running on port 8080 with a REST API at /drivers. The other container is a simple Node.js server with an index.html page, where an AJAX call is performed to save a new driver.
URL in js file is: const URL = "http://storage:8080/drivers";
When I run them using plain Docker with a network I created for them, communication between them works fine. But when I run both containers using docker-compose, I get the status "(failed) net::ERR_NAME_NOT_RESOLVED".
When I open a bash shell in either of these containers and run 'ping storage', I receive packets normally.
What am I missing?
Dockerfile for the Java server:
FROM java:8
VOLUME /tmp
ADD target/docker-project-1.0-SNAPSHOT.jar app.jar
EXPOSE 8080
RUN sh -c 'touch /app.jar'
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
Dockerfile for the Node.js server:
FROM node:argon
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN npm install connect serve-static
COPY . /usr/src/app
EXPOSE 8081
CMD ["node", "server.js"]
docker-compose.yml file:
version: '2.1'
services:
  client:
    image: glasierr/js-client
    networks:
      default:
        aliases:
          - "client"
    links:
      - "storage"
    ports:
      - "8081:8081"
  storage:
    image: glasierr/drivers-storage
    networks:
      default:
        aliases:
          - "storage"
    ports:
      - "8080:8080"
    expose:
      - "8080"
JS script:
const URL = "http://storage:8080/drivers";
$.ajax({
  headers: {
    'Content-Type': 'application/json'
  },
  type: "POST",
  url: URL,
  data: JSON.stringify({
    licenceId: licenceId,
    name: name,
    surname: surname,
    email: email
  }),
  dataType: "json"
});
I cannot see any mistake; I tried it and it worked with compose file version 2, so maybe it is not working with the experimental version.
But you can simplify your compose file to this:
version: '2'
services:
  client:
    image: glasierr/js-client
    ports:
      - 8081:8081
  storage:
    image: glasierr/drivers-storage
    ports:
      - 8080:8080
All containers are automatically on the compose default network and are reachable by their service name (and your alias is the same), so you can remove networks. links does the same as the default network plus depends_on and is not needed in this example. You also only need expose if you don't publish 8080.
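To double-check name resolution from inside the client container once the stack is up (assuming ping is available in the image, as the question suggests), something like this should get replies:

docker-compose exec client ping storage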
