cannot connect node container to redis container - docker

I'm trying to dockerize an existing app that uses Angular for the frontend, node.js for the backend, and Postgres as the DB. I've created a network and tried to run the containers one by one (without the DB for now), but I get an error with the node.js backend.
I've built the backend image with this Dockerfile:
FROM node:10.17.0-alpine
WORKDIR /app
COPY package*.json ./
RUN npm i
COPY . .
EXPOSE 3000
CMD node --experimental-worker backend.js
and the frontend one with this:
FROM node:10.17
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
RUN apt-get update && apt-get install -yq google-chrome-stable
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json /app/package.json
RUN npm install
RUN npm install -g @angular/cli@7.3.9
COPY . /app
EXPOSE 4200
CMD ng serve --host 0.0.0.0
I've built the images and I've started the containers with:
docker container run --network my-network -it -p 4200:4200 frontend
docker container run --network my-network -it -p 3000:3000 backend
docker container run --network my-network -it --name my-redis -p 6379:6379 redis
but the backend relies on redis to start so it fails with the error:
Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1107:14)
I've tried:
docker container run --network my-network --link my-redis:redis -it -p 3000:3000 backend
but it doesn't help at all. I'm sorry but I am very new to Docker so any clarification would be useful.

Your backend service is trying to connect to 127.0.0.1:6379; it should connect to my-redis:6379 instead.
Basically, you need to inject the redis host into your backend service. There are multiple ways to do so, but the most common is to read it from an environment variable (e.g. REDIS_HOST=my-redis).
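For example, a minimal sketch of reading the host from an environment variable with the node_redis client (the variable name REDIS_HOST and the client setup are assumptions; adapt them to your backend):
// backend.js (sketch): take the redis host from the environment,
// falling back to localhost when running outside Docker
const redis = require('redis');
const client = redis.createClient({
  host: process.env.REDIS_HOST || '127.0.0.1',
  port: 6379
});
client.on('error', (err) => console.error('Redis error:', err));
Then pass the variable when starting the backend container:
docker container run --network my-network -e REDIS_HOST=my-redis -it -p 3000:3000 backend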

Related

Accessing Python WebApp in Docker Connection Refused

I have a web app in python3. I'm trying to access the web app from another machine on my network (cloud environment). The web app works.
The Dockerfile is as follows:
FROM python:3.8.0-slim as builder
RUN apt-get update \
&& apt-get install gcc -y \
&& apt-get clean
COPY app/requirements.txt /app/requirements.txt
WORKDIR app
RUN pip install --upgrade pip && pip3 install --user -r requirements.txt
COPY ./app /app
# Production image
FROM python:3.8.0-slim as app
COPY --from=builder /root/.local /root/.local
COPY --from=builder /app /app
WORKDIR app
ENV PATH=/root/.local/bin:$PATH
EXPOSE 5000
ENTRYPOINT gunicorn main:app -b 0.0.0.0:5000
$ docker build -t simu .
$ docker run --name simu --detach -p 5000:5000 -it simu:v1
11a8e9e2f7f2d70d39cac2b3e2a14c25ae2bf9536db005dee1d473aa588139ae
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
11a8e9e2f7f2 simu:v1 "/bin/sh -c 'gunicor…" 3 seconds ago Up 3 seconds 0.0.0.0:5000->5000/tcp simu
$ curl -X POST {} http://localhost:5000
curl: (3) <url> malformed
curl: (56) Recv failure: Connection reset by peer
Why do I receive that error?
UPDATE
If I run the container as
/usr/bin/docker run --name simu --detach --network host -e REGISTRAR_URL=http://localhost:8500 -it simu:v1
The service is accessible from the same machine:
curl -X POST {} http://localhost:5000
curl: (3) <url> malformed
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>405 Method Not Allowed</title>
<h1>Method Not Allowed</h1>
<p>The method is not allowed for the requested URL.</p>
But it is not from another machine on the same LAN:
$ curl -X POST http://algo:5000/generate-prod
curl: (7) Failed connect to algo:5000; Connection refused
I have stopped firewalld also.
If I run the application without the container everything works, so I think there must be something in the networking with docker which is not set correctly.
You expose the port but you do not publish it. Check this post on the difference. You need to add -p 5000:5000 to the docker run command.
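Applied to the command from the update (dropping --network host, which bypasses port publishing entirely), that would look something like:
docker run --name simu --detach -p 5000:5000 -e REGISTRAR_URL=http://localhost:8500 -it simu:v1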

Running redis on nodejs Docker image

I have a Docker image for a node.js application. The app retrieves some configuration values from Redis, which is running locally. Because of that, I am trying to install and run Redis inside the same Docker image.
How can I extend the Dockerfile and configure Redis in it?
As of now, the Dockerfile is as below:
FROM node:carbon
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
EXPOSE 3011
CMD node /app/src/server.js
The best solution would be to use Docker Compose. With this you would create a redis container, link to it, then start your node.js app. The first thing would be to install Docker Compose, as detailed here: https://docs.docker.com/compose/install/
Once you have it up and running, you should create a docker-compose.yml in the same folder as your app's Dockerfile. It should contain the following:
version: '3'
services:
  myapp:
    build: .
    ports:
      - "3011:3011"
    links:
      - redis:redis
  redis:
    image: "redis:alpine"
Then redis will be accessible from your node.js app, but instead of localhost:6379 you would use redis:6379 to reach the redis instance.
To start your app you would run docker-compose up in your terminal. Best practice would be to use a network instead of links, but links are used here for simplicity.
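In the app code this is just a change to the connection options; a minimal sketch, assuming the node_redis client is what the app uses:
const redis = require('redis');
// 'redis' is the service name from docker-compose.yml; Docker's embedded DNS resolves it
const client = redis.createClient({ host: 'redis', port: 6379 });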
This can also be done in a single image if desired, with both redis and node.js. The following Dockerfile should work; it is based on the one in the question:
FROM node:carbon
RUN wget http://download.redis.io/redis-stable.tar.gz && \
tar xvzf redis-stable.tar.gz && \
cd redis-stable && \
make && \
mv src/redis-server /usr/bin/ && \
cd .. && \
rm -r redis-stable && \
npm install -g concurrently
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
EXPOSE 3011
EXPOSE 6379
CMD concurrently "/usr/bin/redis-server --bind '0.0.0.0'" "sleep 5s; node /app/src/server.js"
This second method is really bad practice, and I have used concurrently instead of supervisor or a similar tool for simplicity. The sleep in the CMD is there to allow redis to start before the app is actually launched; adjust it to whatever suits you best. Hope this helps, and that you use the first method, as it is much better practice.
My use case was to add a redis server to an alpine tomcat image:
So this worked:
FROM tomcat:8.5.40-alpine
RUN apk add --no-cache redis
RUN apk add --no-cache screen
EXPOSE 6379
EXPOSE 3011
## Run Tomcat
CMD screen -d -m -S Redis /usr/bin/redis-server --bind '0.0.0.0' && \
${CATALINA_HOME}/bin/catalina.sh run
EXPOSE 8080
If you are looking for a bare-minimum Docker image with nodejs and redis-server, this works:
FROM nikolaik/python-nodejs:python3.5-nodejs8
RUN apt-get update && \
    apt-get -y install redis-server
COPY . /app
WORKDIR /app
# redis must be launched when the container starts (not as a build step);
# start it in the background at the front of the CMD (sh-compatible redirection), e.g.:
CMD nohup redis-server > redis.log 2>&1 & node server.js
and then you can add further steps for your node application.

Not being able to access webapp from host in Docker

I have a simple web project which I want to "Dockerize", but I keep failing at exposing the webapp to the host.
My Dockerfile looks like:
FROM debian:jessie
RUN apt-get update -y && \
apt-get install -y python-pip python-dev curl && \
pip install --upgrade pip setuptools
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip install -r requirements.txt
COPY . /app
WORKDIR /app/web
And requirements.txt looks like:
PasteScript==2.0.2
Pylons==1.0.2
The web directory was built using:
paster create --template=pylons web
And finally start_server.sh:
#!/bin/bash
paster serve --daemon development.ini start
Now I am able to build with :
docker build -t webapp .
And then run command:
docker run -it -p 5000:5000 --name app webapp:latest /bin/bash
And then inside the docker container I run:
bash start_server.sh
which successfully starts the webapp on port 5000, and if I curl inside the docker container I get the expected response. The container is also up and running with the correct port mappings:
bc6511d584ae webapp:latest "/bin/bash" 2 minutes ago Up 2 minutes 0.0.0.0:5000->5000/tcp app
Now if I run docker port app it looks fine:
5000/tcp -> 0.0.0.0:5000
However, I cannot get any response from the server on the host with:
curl localhost:5000
I have probably misunderstood something here but it seems fine to me.
In your Dockerfile you need to add EXPOSE 5000. Your port mapping is correct; think of EXPOSE as opening the port on your container, which you then map to the host with -p.
Answer in the comments: when you make_server, bind to 0.0.0.0 instead of localhost.
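For a paster-generated Pylons project the bind address usually lives in development.ini; a sketch of the relevant section (the exact keys in your config may differ):
[server:main]
use = egg:Paste#http
# bind to all interfaces so the app is reachable from outside the container
host = 0.0.0.0
port = 5000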

Docker-machine Port Forwarding on Windows not working

I'm attempting to access my django app running within Docker on my Windows machine. I'm using docker-machine. I've been taking a crack at this for hours now.
Here's my Dockerfile for my django app:
FROM python:3.4-slim
RUN apt-get update && apt-get install -y \
gcc \
gettext \
vim \
curl \
postgresql-client libpq-dev \
--no-install-recommends && rm -rf /var/lib/apt/lists/*
EXPOSE 8000
WORKDIR /home/
# add app files from git repo
ADD . server/
WORKDIR /home/server
RUN pip install -r requirements.txt
CMD ["python", "manage.py", "runserver", "8000"]
So that should be exposing (at least in the container) port 8000.
When I use the command docker-machine ip default I am given the IP 192.168.99.101. I go to that IP on port 8000 but get no response.
I went into the VirtualBox settings to see if forwarding those ports would work (the port-forwarding configuration screenshot is not shown here).
I also tried using 127.0.0.1 as the Host IP, and disabling the Windows firewall.
Here's my command for starting the container:
docker run --rm -it -p 8000:8000 <imagename>
I am at a loss as to why I am unable to connect on that port. When I run docker-machine ls, the URL it gives me is tcp://192.168.99.101:2376, and when I go to that it gives me some kind of file back, so I know the docker machine is active on that port.
Also when I run docker ps I get this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5c00cc28a2bd <image name> "python manage.py run" 7 minutes ago Up 7 minutes 0.0.0.0:8000->8000/tcp drunk_knuth
Any help would be greatly appreciated.
The issue was that the server was running on 127.0.0.1 when it should have been running on 0.0.0.0.
I changed the CMD line in the Dockerfile from
CMD ["python", "manage.py", "runserver", "8000"]
to
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
and it now works.

issue with exposing ports using docker compose

docker run -it -p 3000:3000 -v $(pwd):/src budotemplate_app node server.js works, but docker-compose run app node server.js doesn't show anything in the browser. Any ideas?
https://github.com/oren/budo-template/blob/af0681a3b8af4d6f4ca16d4a371f775261986476/docker-compose.yml
docker-compose.yml
app:
  build: .
  volumes:
    - .:/src
  ports:
    - "3000:3000"
  expose:
    - "3000"
Dockerfile
FROM alpine:edge
RUN echo "http://dl-4.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
RUN apk update
RUN apk add --update iojs && rm -rf /var/cache/apk/*
WORKDIR /src
COPY . /src
EXPOSE 3000
CMD ["node"]
The run command in docker-compose behaves differently from docker run.
If you want the ports to be published you have to use --service-ports.
This is the complete command: docker-compose run --service-ports app node server.js
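Note that docker-compose up, unlike docker-compose run, publishes the ports: mappings automatically; for it to start the server you would also need a command: entry, which the file above does not have. A sketch:
app:
  build: .
  command: node server.js
  volumes:
    - .:/src
  ports:
    - "3000:3000"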
