Running Redis in a Node.js Docker image - docker

I have a Docker image for a Node.js application. The app retrieves some configuration values from Redis, which is running locally. Because of that, I am trying to install and run Redis within the same container.
How can I extend the Dockerfile and configure Redis in it?
As of now, the Dockerfile looks like this:
FROM node:carbon
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
EXPOSE 3011
CMD node /app/src/server.js

The best solution would be to use Docker Compose. With it you create a Redis container, link to it, and then start your Node.js app. The first step is to install Docker Compose, as detailed here: https://docs.docker.com/compose/install/.
Once you have it up and running, create a docker-compose.yml in the same folder as your app's Dockerfile. It should contain the following:
version: '3'
services:
  myapp:
    build: .
    ports:
      - "3011:3011"
    links:
      - redis:redis
  redis:
    image: "redis:alpine"
Redis will then be accessible from your Node.js app, but instead of localhost:6379 you would use redis:6379 to reach the Redis instance.
To start your app, run docker-compose up in your terminal. Best practice would be to use a network instead of links, but this example was kept simple.
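As a rough sketch of that network-based variant (same service names as above): with a recent Compose file format you can simply drop the links section, because all services defined in one docker-compose.yml share a default network and can reach each other by service name.
version: '3'
services:
  myapp:
    build: .
    ports:
      - "3011:3011"
    # no links needed: myapp can reach Redis at redis:6379 over the default network
  redis:
    image: "redis:alpine"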
If you really want both Redis and Node.js in the same image, the following Dockerfile should work; it is based on the one in the question:
FROM node:carbon
RUN wget http://download.redis.io/redis-stable.tar.gz && \
    tar xvzf redis-stable.tar.gz && \
    cd redis-stable && \
    make && \
    mv src/redis-server /usr/bin/ && \
    cd .. && \
    rm -r redis-stable && \
    npm install -g concurrently
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
EXPOSE 3011
EXPOSE 6379
CMD concurrently "/usr/bin/redis-server --bind '0.0.0.0'" "sleep 5s; node /app/src/server.js"
This second method is really bad practice, and I used concurrently instead of supervisord or a similar tool for simplicity. The sleep in the CMD is there to let Redis start before the app is launched; adjust it to whatever suits you best. Hope this helps, and that you use the first method, as it is much better practice.
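If the fixed sleep proves flaky, one alternative (a sketch, assuming you also copy redis-cli into /usr/bin during the build, e.g. mv src/redis-cli /usr/bin/) is to poll Redis until it answers before starting the app:
# replace the sleep with a readiness loop: wait until redis-cli gets a PONG, then start the app
CMD concurrently "/usr/bin/redis-server --bind '0.0.0.0'" \
    "until redis-cli -h 127.0.0.1 ping > /dev/null 2>&1; do sleep 1; done; node /app/src/server.js"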

My use case was to add a Redis server to an Alpine Tomcat image.
This worked:
FROM tomcat:8.5.40-alpine
RUN apk add --no-cache redis
RUN apk add --no-cache screen
EXPOSE 6379
EXPOSE 3011
## Run Tomcat
CMD screen -d -m -S Redis /usr/bin/redis-server --bind '0.0.0.0' && \
${CATALINA_HOME}/bin/catalina.sh run
EXPOSE 8080

If you are looking for a bare-minimum image with Node.js and redis-server, this works:
FROM nikolaik/python-nodejs:python3.5-nodejs8
RUN apt-get update && \
    apt-get -y install redis-server
COPY . /app
WORKDIR /app
# start Redis in the background, then launch the app (the start command from the question)
CMD nohup redis-server &> redis.log & node /app/src/server.js
and from there you can add whatever further steps your Node application needs.

Related

Docker ENTRYPOINT does not run two commands

I have a docker-compose.yml with two services, Grafana and Ubuntu. I'm trying to run the Prometheus and node_exporter commands in the Ubuntu container through an entrypoint, but it only works for the first command.
Dockerfile:
FROM ubuntu:20.04
ENV PROMETHEUS_VERISION=2.38.0
ENV NODE_EXPORTER_VERISION=1.4.0
RUN apt update -y && apt upgrade -y
RUN apt install -y wget
WORKDIR /
# Install Prometheus
RUN wget https://github.com/prometheus/prometheus/releases/download/v$PROMETHEUS_VERISION/prometheus-$PROMETHEUS_VERISION.linux-amd64.tar.gz && \
    tar xvfz prometheus-$PROMETHEUS_VERISION.linux-amd64.tar.gz
ADD cstm_prometheus.yml /prometheus-$PROMETHEUS_VERISION.linux-amd64/cstm_prometheus.yml
EXPOSE 9090
# Install Node Exporter
RUN wget https://github.com/prometheus/node_exporter/releases/download/v$NODE_EXPORTER_VERISION/node_exporter-$NODE_EXPORTER_VERISION.linux-amd64.tar.gz && \
    tar xvfz node_exporter-$NODE_EXPORTER_VERISION.linux-amd64.tar.gz
EXPOSE 9100
COPY ./cstm_entrypoint.sh /
RUN ["chmod", "+x", "/cstm_entrypoint.sh"]
ENTRYPOINT ["/cstm_entrypoint.sh"]
cstm_entrypoint.sh:
#!/bin/bash
./prometheus-$PROMETHEUS_VERISION.linux-amd64/prometheus --config.file=/prometheus-$PROMETHEUS_VERISION.linux-amd64/cstm_prometheus.yml
./node_exporter-$NODE_EXPORTER_VERISION.linux-amd64/node_exporter
When I check the services in a web browser, I have access to:
grafana: 0.0.0.0:3000
prometheus: 0.0.0.0:9090
but not to node_exporter on 0.0.0.0:9100.
Could anybody help me, please?
Thanks in advance.
Your script waits for Prometheus to finish before it starts node_exporter. Try adding an & at the end of the Prometheus command so it detaches from the shell. Then the script will continue and run the node_exporter command. Like this:
#!/bin/bash
./prometheus-$PROMETHEUS_VERISION.linux-amd64/prometheus --config.file=/prometheus-$PROMETHEUS_VERISION.linux-amd64/cstm_prometheus.yml &
./node_exporter-$NODE_EXPORTER_VERISION.linux-amd64/node_exporter
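A variation on the same idea (a sketch, not part of the answer above): background both processes and let the shell wait on them, so the container's lifetime is not tied to node_exporter alone.
#!/bin/bash
# start both services in the background, then wait for them
./prometheus-$PROMETHEUS_VERISION.linux-amd64/prometheus --config.file=/prometheus-$PROMETHEUS_VERISION.linux-amd64/cstm_prometheus.yml &
./node_exporter-$NODE_EXPORTER_VERISION.linux-amd64/node_exporter &
wait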

File created in image by Docker not reflected in container run by Docker Compose

I have a Dockerfile with the command RUN python3 manage.py dumpdata --natural-foreign --exclude=auth.permission --exclude=contenttypes --indent=4 > data.json, which creates a JSON file.
When I build the Dockerfile it creates an image with a specific name, and when I run it using the command below and open a bash shell, I am able to see the data.json file that was created:
docker run -it --rm vijeth11/fassionplaza bash
[Screenshot: files in the Docker container created via the above command]
When I use the same image and run docker-compose run web bash,
I am not able to see the data.json file, while other files are present in the container.
[Screenshot: files in the Docker container created via Docker Compose]
Is there anything wrong with my Docker commands?
Command used to build:
docker build --no-cache -t vijeth11/fassionplaza .
Docker-compose.yml
version: "3"
services:
db:
image: postgres
environment:
- POSTGRES_DB=fashionplaza
ports:
- "5432:5432"
web:
image: vijeth11/fassionplaza
command: >
sh -c "ls -l && python3 manage.py makemigrations && python3 manage.py migrate && python3 manage.py loaddata data.json && gunicorn --bind :8000 --workers 3 FashionPlaza.wsgi"
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
Dockerfile
FROM python:3.7
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY ./Backend /code/Backend
COPY ./frontEnd /code/frontEnd
WORKDIR /code/Backend
RUN pip3 install -r requirements.txt
WORKDIR /code/Backend/FashionPlaza
RUN python3 manage.py dumpdata --natural-foreign \
    --exclude=auth.permission --exclude=contenttypes \
    --indent=4 > data.json
RUN chmod 755 data.json
WORKDIR /code/frontEnd/FashionPlaza
RUN apt-get update -y
RUN apt -y install curl dirmngr apt-transport-https lsb-release ca-certificates
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash
RUN apt install nodejs -y
RUN npm i
RUN npm run prod
ARG buildtime_variable=PROD
ENV server_type=$buildtime_variable
WORKDIR /code/Backend/FashionPlaza
Thank you in advance.
You map your current directory to /code when you run, with these lines in your docker-compose file:
volumes:
  - .:/code
That hides all existing files in /code and replaces them with the contents of the mapped directory.
Since your data.json file is located in /code/Backend/FashionPlaza in the image, it becomes hidden and inaccessible.
The best thing to do is to map your volumes to empty directories in the image, so you don't inadvertently hide anything.
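As a sketch of that (my addition; the path name is a placeholder): either drop the bind mount for the web service entirely, or mount the host code at a path that is empty in the image, so data.json stays visible.
web:
  image: vijeth11/fassionplaza
  # Option 1: remove the volumes section, so the image's /code (including data.json) is used as-is.
  # Option 2: mount the host directory somewhere that does not shadow the generated file:
  volumes:
    - .:/host-code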

Can't connect to Rails Docker container on localhost

I'm having trouble accessing my containerized Rails app from my local machine. I'm following this quickstart guide as a template and made some tweaks to the paths for my Gemfile and Gemfile.lock. The quickstart guide moves on to docker-compose, but I want to try accessing the app without it first, to get familiar with these processes before moving on.
This is my Dockerfile:
FROM ruby:2.5
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile ./Gemfile
COPY Gemfile.lock ./Gemfile.lock
RUN gem install bundler -v 2.0.1
RUN bundle install
COPY . /myapp
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000:3000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0"]
and this is the entrypoint file:
#!/bin/bash
set -e
# Remove a potentially pre-existing server.pid for Rails.
rm -f /myapp/tmp/pids/server.pid
# Then exec the container's main process (what's set as CMD in the Dockerfile).
exec "$#"
I am able to successfully build and run the image, but when I try to access 0.0.0.0:3000 I get a "can't connect" error.
I also attached a screenshot of my app directory structure; the Dockerfile and entrypoint are at the root.
One thing that seems strange is that when I check the container's logs I don't get any output, but when I shut the container down I see the startup logs. Not sure why that is.
I am running docker desktop 2.1.0.3. Any thoughts/help are very appreciated.
Use just EXPOSE 3000 in the Dockerfile.
Then run a container named ror from your new image, mapping the port to localhost:
docker run -d --name ror -p 3000:3000 <image>
Now you should be able to access localhost:3000.
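To double-check the mapping (a quick sketch; the container name matches the command above), you can ask Docker which host port is bound and then hit it with curl:
# show the host port(s) published for the container's port 3000
docker port ror 3000
# request the Rails app from the host
curl -I http://localhost:3000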
Here's an example of mine that works:
The usual Dockerfile, nothing special here.
Then, in docker-compose.yml, add the DATABASE_URL as an environment variable or place it in a .env file (the important bit is using host.docker.internal instead of localhost).
Then, in your database.yml, specify the URL with the ENV key.
Then start the containers by running docker-compose up.
#Dockerfile
FROM ruby:3.0.5-alpine
RUN apk add --update --no-cache \
    bash \
    build-base \
    tzdata \
    postgresql-dev \
    yarn \
    git \
    curl \
    wget \
    gcompat
COPY Gemfile Gemfile.lock ./
WORKDIR /app
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash
RUN gem install bundler:2.4.3
RUN bundle lock --add-platform x86_64-linux
RUN bundle install
COPY package.json yarn.lock ./
RUN yarn install
COPY . .
EXPOSE 3000
CMD ["rails", "server", "-b", "0.0.0.0", "--pid=/tmp/server.pid"]
#docker-compose.yml
version: "3.9"
services:
  app:
    image: your_app_name
    volumes:
      - /app
    env_file:
      - .env
    environment:
      - DATABASE_URL=postgresql://postgres@host.docker.internal:5432/<your_db_name>
    ports:
      - "3000:3000"
  webpack_dev_server:
    build: .
    command: bin/webpack-dev-server
    ports:
      - "3035:3035"
    volumes:
      - /app
    env_file:
      - .env
    environment:
      - WEBPACKER_DEV_SERVER_HOST=0.0.0.0
  redis:
    image: redis
#database.yml
development:
  <<: *default
  database: giglifepro_development
  url: <%= ENV.fetch('DATABASE_URL') %>
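One caveat worth noting (my addition, not part of the answer): host.docker.internal resolves out of the box on Docker Desktop for Mac and Windows; on Linux you typically have to map it yourself, for example with extra_hosts on the app service:
app:
  extra_hosts:
    # make host.docker.internal point at the host's gateway on Linux (Docker 20.10+)
    - "host.docker.internal:host-gateway"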

Duplication in Dockerfiles

I have a Django Web-Application that uses celery in the background for periodic tasks.
Right now I have three docker images
one for the django application
one for celery workers
one for the celery scheduler
whose Dockerfiles all look like this:
FROM alpine:3.7
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY Pipfile Pipfile.lock ./
RUN apk update && \
apk add python3 postgresql-libs jpeg-dev git && \
apk add --virtual .build-deps gcc python3-dev musl-dev postgresql-dev zlib-dev && \
pip3 install --no-cache-dir pipenv && \
pipenv install --system && \
apk --purge del .build-deps
COPY . ./
# Run the image as a non-root user
RUN adduser -D noroot
USER noroot
EXPOSE $PORT
CMD <Different CMD for all three containers>
So they are all exactly the same except the last line.
Would it make sense here to create some kind of base image that contains everything except the CMD, and have all three images use it as a base and add only their respective CMD?
Or won't that give me any advantages, because everything is cached anyway?
Is a separation like the one above reasonable?
Two small bonus questions:
Sometimes the apk update ... layer is cached by Docker. How does Docker know that there are no updates here?
I often read that I should reduce the number of layers as far as possible to keep the image small. But isn't that against the caching idea, and won't it result in longer builds?
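For reference, a minimal sketch of the base-image idea raised above (file and tag names are placeholders): build the shared image once, then each service's Dockerfile only adds its CMD.
# Dockerfile.worker (placeholder name) -- derives from a shared base image
# that was built from the common Dockerfile and tagged, e.g.
#   docker build -t myproject-base -f Dockerfile.base .
FROM myproject-base
CMD ["celery", "worker"]
The web and scheduler images would differ only in their CMD line.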
I suggest using one Dockerfile and just overriding the CMD at runtime. With a small modification it will work both locally and on Heroku.
As far as Heroku is concerned, you can set environment variables for the container; see Heroku's documentation on setting up your local environment variables.
FROM alpine:3.7
ENV PYTHONUNBUFFERED 1
ENV APPLICATION_TO_RUN=default_application
RUN mkdir /code
WORKDIR /code
COPY Pipfile Pipfile.lock ./
RUN apk update && \
    apk add python3 postgresql-libs jpeg-dev git && \
    apk add --virtual .build-deps gcc python3-dev musl-dev postgresql-dev zlib-dev && \
    pip3 install --no-cache-dir pipenv && \
    pipenv install --system && \
    apk --purge del .build-deps
COPY . ./
# Run the image as a non-root user
RUN adduser -D noroot
USER noroot
EXPOSE $PORT
CMD $APPLICATION_TO_RUN
So when you run your container, pass the application's start command via the environment variable:
docker run -it --name test -e APPLICATION_TO_RUN="celery beat" --rm test
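The other two services would be started the same way, only with a different value (a sketch; the exact Django and worker start commands depend on your project):
# hypothetical start commands for the web app and the worker
docker run -it --name web -e APPLICATION_TO_RUN="python3 manage.py runserver 0.0.0.0:8000" --rm test
docker run -it --name worker -e APPLICATION_TO_RUN="celery worker" --rm test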
I would recommend looking at docker-compose to simplify management of multiple containers.
Use a single Dockerfile like the one you posted above, then create a docker-compose.yml that might look something like this:
version: '3'
services:
  # a django service serving an application on port 80
  django:
    build: .
    command: python manage.py runserver
    ports:
      - 8000:80
  # the celery worker
  worker:
    build: .
    command: celery worker
  # the celery scheduler
  scheduler:
    build: .
    command: celery beat
Of course, modify the commands here to be whatever you are using for your currently separate Dockerfiles.
When you want to rebuild the image, docker-compose build will rebuild the image from your Dockerfile for the first service, then reuse the built layers for the other services (because they already exist in the cache). docker-compose up will spin up three containers from that image, overriding the run command for each.
If you want to get more sophisticated, there are plenty of resources out there for the very common combination of django and celery.

Issue with exposing ports using docker-compose

docker run -it -p 3000:3000 -v $(pwd):/src budotemplate_app node server.js works, but docker-compose run app node server.js doesn't show anything in the browser. Any ideas?
https://github.com/oren/budo-template/blob/af0681a3b8af4d6f4ca16d4a371f775261986476/docker-compose.yml
docker-compose.yml
app:
  build: .
  volumes:
    - .:/src
  ports:
    - "3000:3000"
  expose:
    - "3000"
Dockerfile
FROM alpine:edge
RUN echo "http://dl-4.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
RUN apk update
RUN apk add --update iojs && rm -rf /var/cache/apk/*
WORKDIR /src
COPY . /src
EXPOSE 3000
CMD ["node"]
The run command in docker-compose behaves differently from docker run: it does not publish the service's ports by default.
If you want the ports to be published, you have to use --service-ports.
This is the complete command: docker-compose run --service-ports app node server.js
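Alternatively, docker-compose up does publish the declared ports, so if you set command: node server.js for the app service (an assumption; the current Dockerfile's CMD is just node) you could start it with:
# starts the app service (and its dependencies) with the declared port mappings
docker-compose up app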
