docker run -it -p 3000:3000 -v $(pwd):/src budotemplate_app node server.js works, but docker-compose run app node server.js doesn't show anything in the browser. Any ideas?
https://github.com/oren/budo-template/blob/af0681a3b8af4d6f4ca16d4a371f775261986476/docker-compose.yml
docker-compose.yml
app:
  build: .
  volumes:
    - .:/src
  ports:
    - "3000:3000"
  expose:
    - "3000"
Dockerfile
FROM alpine:edge
RUN echo "http://dl-4.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
RUN apk update
RUN apk add --update iojs && rm -rf /var/cache/apk/*
WORKDIR /src
COPY . /src
EXPOSE 3000
CMD ["node"]
The run command in docker-compose behaves differently from docker run: it does not publish the ports declared in docker-compose.yml.
If you want the ports to be published you have to pass --service-ports.
This is the complete command: docker-compose run --service-ports app node server.js
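For comparison, and assuming the compose file above, docker-compose up publishes the ports: mapping on its own, while run only does so with that flag:
# ports published as declared ("3000:3000")
docker-compose up app
# one-off command; ports published only because of --service-ports
docker-compose run --service-ports app node server.js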
Related
I have a Dockerfile with the command RUN python3 manage.py dumpdata --natural-foreign --exclude=auth.permission --exclude=contenttypes --indent=4 > data.json, which creates a JSON file.
When I build the Dockerfile it creates an image with a specific name, and when I run it using the command below and open a bash shell, I can see the data.json file that was created.
docker run -it --rm vijeth11/fassionplaza bash
(screenshot: files in the Docker container created via the above command)
When I use the same image and run docker-compose run web bash,
I am not able to see the data.json file, while the other files are present in the container.
(screenshot: files in the Docker container created via Docker Compose)
Is there anything wrong in my Docker commands?
Command used to build:
docker build --no-cache -t vijeth11/fassionplaza .
Docker-compose.yml
version: "3"
services:
db:
image: postgres
environment:
- POSTGRES_DB=fashionplaza
ports:
- "5432:5432"
web:
image: vijeth11/fassionplaza
command: >
sh -c "ls -l && python3 manage.py makemigrations && python3 manage.py migrate && python3 manage.py loaddata data.json && gunicorn --bind :8000 --workers 3 FashionPlaza.wsgi"
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
Dockerfile
FROM python:3.7
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY ./Backend /code/Backend
COPY ./frontEnd /code/frontEnd
WORKDIR /code/Backend
RUN pip3 install -r requirements.txt
WORKDIR /code/Backend/FashionPlaza
RUN python3 manage.py dumpdata --natural-foreign \
--exclude=auth.permission --exclude=contenttypes \
--indent=4 > data.json
RUN chmod 755 data.json
WORKDIR /code/frontEnd/FashionPlaza
RUN apt-get update -y
RUN apt -y install curl dirmngr apt-transport-https lsb-release ca-certificates
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash
RUN apt install nodejs -y
RUN npm i
RUN npm run prod
ARG buildtime_variable=PROD
ENV server_type=$buildtime_variable
WORKDIR /code/Backend/FashionPlaza
Thank you in advance.
You map your current directory to /code when you run the container, because of these lines in your docker-compose file:
volumes:
- .:/code
That bind mount hides all the files that exist in /code in the image and replaces them with the contents of the mapped host directory.
Since your data.json file was created at /code/Backend/FashionPlaza inside the image, it becomes hidden and inaccessible.
The best thing to do is to map your volumes to directories that are empty in the image, so you don't inadvertently hide anything.
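As a hedged sketch (service and image names taken from the compose file above; /host-code is just an illustrative placeholder), the web service could drop the bind mount or point it at a path that is empty in the image, so the data.json baked into /code/Backend/FashionPlaza stays visible:
web:
  image: vijeth11/fassionplaza
  # either remove "volumes:" entirely, or mount the host directory
  # somewhere that is empty in the image instead of over /code
  volumes:
    - .:/host-code
  ports:
    - "8000:8000"
  depends_on:
    - db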
I'm trying to dockerize an existing app using Angular for the frontend, node.js as a backend and Postgres as DB. I've created a network and tried to run the containers one by one without the DB for now but I get an error with the node.js backend.
I've built the backend image with this Dockerfile:
FROM node:10.17.0-alpine
WORKDIR /app
COPY package*.json ./
RUN npm i
COPY . .
EXPOSE 3000
CMD node --experimental-worker backend.js
and the frontend one with this:
FROM node:10.17
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
RUN apt-get update && apt-get install -yq google-chrome-stable
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json /app/package.json
RUN npm install
RUN npm install -g @angular/cli@7.3.9
COPY . /app
EXPOSE 4200
CMD ng serve --host 0.0.0.0
I've built the images and I've started the containers with:
docker container run --network my-network -it -p 4200:4200 frontend
docker container run --network my-network -it -p 3000:3000 backend
docker container run --network my-network -it --name my-redis -p 6379:6379 redis
but the backend relies on redis to start so it fails with the error:
Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1107:14)
I've tried:
docker container run --network my-network --link my-redis:redis -it -p 3000:3000
but it doesn't help at all. I'm sorry but I am very new to Docker so any clarification would be useful.
Your backend service is trying to connect to 127.0.0.1:6379; inside the container it should connect to my-redis:6379 instead (the name of the Redis container on the shared network).
Basically, you need to inject the Redis host into your backend service. There are multiple ways to do so, but the most common is to read it from an environment variable (e.g. REDIS_HOST=my-redis).
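A minimal sketch of that approach, assuming the node_redis client with v3-style options and REDIS_HOST/REDIS_PORT variable names chosen here for illustration:
// backend.js (sketch): read the Redis location from the environment instead of hardcoding 127.0.0.1
const redis = require('redis');

const host = process.env.REDIS_HOST || '127.0.0.1';
const port = Number(process.env.REDIS_PORT) || 6379;

const client = redis.createClient({ host, port });
client.on('error', (err) => console.error('Redis error:', err));
The backend container is then started with the variable set, e.g. docker container run --network my-network -e REDIS_HOST=my-redis -it -p 3000:3000 backend.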
I'm having trouble accessing my containerized Rails app from my local machine. I'm following this quickstart guide as a template and made some tweaks to the paths for my Gemfile and Gemfile.lock. The quickstart guide moves on to docker-compose, but I want to try accessing the app without it first, to get familiar with these processes before moving on.
This is my dockerfile:
FROM ruby:2.5
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile ./Gemfile
COPY Gemfile.lock ./Gemfile.lock
RUN gem install bundler -v 2.0.1
RUN bundle install
COPY . /myapp
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000:3000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0"]
and this is the entrypoint file:
#!/bin/bash
set -e
# Remove a potentially pre-existing server.pid for Rails.
rm -f /myapp/tmp/pids/server.pid
# Then exec the container's main process (what's set as CMD in the Dockerfile).
exec "$#"
I am able to successfully build and run the image, but when I try to access 0.0.0.0:3000 I get a "can't connect" error.
I also attached a screenshot of my app directory structure; the Dockerfile and entrypoint are at the root.
One thing that seems strange is that when I try to view the logs for the container I don't get any output, but when I shut the container down I see the startup logs. Not sure why that is.
I am running Docker Desktop 2.1.0.3. Any thoughts/help are very appreciated.
Use just EXPOSE 3000 in the Dockerfile.
Then run a container (named ror here) with the port mapped to localhost from your new Docker image <image>:
docker run -d --name ror -p 3000:3000 <image>
Now you should be able to access localhost:3000.
Here's an example of mine that works:
The usual Dockerfile, nothing special here.
Then, in docker-compose.yml, add an environment variable (or place it in a .env file) for DATABASE_URL; the important bit is using host.docker.internal instead of localhost.
Then in your database.yml, specify the url with the ENV key.
Then start the container by running docker-compose up.
#Dockerfile
FROM ruby:3.0.5-alpine
RUN apk add --update --no-cache \
bash \
build-base \
tzdata \
postgresql-dev \
yarn \
git \
curl \
wget \
gcompat
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash
RUN gem install bundler:2.4.3
RUN bundle lock --add-platform x86_64-linux
RUN bundle install
COPY package.json yarn.lock ./
RUN yarn install
COPY . .
EXPOSE 3000
CMD ["rails", "server", "-b", "0.0.0.0", "--pid=/tmp/server.pid"]
#docker-compose.yml
version: "3.9"
services:
app:
image: your_app_name
volumes:
- /app
env_file:
- .env
environment:
- DATABASE_URL=postgresql://postgres#host.docker.internal:5432/<your_db_name>
ports:
- "3000:3000"
webpack_dev_server:
build: .
command: bin/webpack-dev-server
ports:
- "3035:3035"
volumes:
- /app
env_file:
- .env
environment:
- WEBPACKER_DEV_SERVER_HOST=0.0.0.0
redis:
image: redis
#database.yml
development:
  <<: *default
  database: giglifepro_development
  url: <%= ENV.fetch('DATABASE_URL') %>
Tell me how to translate this docker command into a docker-compose command.
Using the docker command, I can see my app working on http://localhost:9000.
But with docker-compose, it doesn't.
It may be because of the port mapping, but I have no idea.
What's the reason?
Following are the files and commands I've tried.
dockerfile
FROM node:8.11.3-alpine
WORKDIR /app
RUN apk update \
&& npm install -g npm #vue/cli \
&& npm install
EXPOSE 8080
CMD ["npm", "run", "serve"]
docker-compose.yaml
version: '3'
services:
  service:
    build: .
    ports:
      - "9000:8080"
    volumes:
      - ./:/app
docker command: ok
sudo docker run -it -v $(pwd):/app -p 9000:8080 dockerimage
docker-compose command: problem
sudo docker-compose run service
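As the first answer above explains, docker-compose run does not publish the ports: mapping unless --service-ports is passed, which would explain why the plain docker run -p 9000:8080 variant works. A hedged sketch of the compose equivalents:
# one-off run with the "9000:8080" mapping published
sudo docker-compose run --service-ports service
# or simply bring the service up, which publishes ports by default
sudo docker-compose up service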
I have a Docker image which is a Node.js application. The app retrieves some configuration values from Redis, which is running locally. Because of that, I am trying to install and run Redis within the same container, inside the Docker image.
How can I extend the Dockerfile and configure Redis in it?
As of now, the Dockerfile is as below:
FROM node:carbon
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
EXPOSE 3011
CMD node /app/src/server.js
The best solution would be to use Docker Compose. With this you would create a Redis container, link to it, and then start your Node.js app. The first thing to do is install Docker Compose, as detailed here: https://docs.docker.com/compose/install/.
Once you have it up and running, you should create a docker-compose.yml in the same folder as your app's Dockerfile. It should contain the following:
version: '3'
services:
  myapp:
    build: .
    ports:
      - "3011:3011"
    links:
      - redis:redis
  redis:
    image: "redis:alpine"
Then Redis will be accessible from your Node.js app, but instead of localhost:6379 you would use redis:6379 to reach the Redis instance.
To start your app you would run docker-compose up in your terminal. Best practice would be to use a network instead of links, but this was done for simplicity.
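For instance, a minimal sketch assuming the node_redis client with v3-style options; the only change from a local setup is the hostname:
// inside the app (sketch): use the compose service name instead of localhost
const redis = require('redis');
const client = redis.createClient({ host: 'redis', port: 6379 });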
This can also be done as desired, with both Redis and Node.js in the same image. The following Dockerfile should work; it is based on what is in the question:
FROM node:carbon
RUN wget http://download.redis.io/redis-stable.tar.gz && \
tar xvzf redis-stable.tar.gz && \
cd redis-stable && \
make && \
mv src/redis-server /usr/bin/ && \
cd .. && \
rm -r redis-stable && \
npm install -g concurrently
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
EXPOSE 3011
EXPOSE 6379
CMD concurrently "/usr/bin/redis-server --bind '0.0.0.0'" "sleep 5s; node /app/src/server.js"
This second method is really bad practice; I have used concurrently instead of supervisor or a similar tool for simplicity. The sleep in the CMD is there to allow Redis to start before the app is actually launched; you should adjust it to whatever suits you best. I hope this helps, and that you use the first method, as it is much better practice.
My use case was to add a Redis server to the Alpine Tomcat flavour:
So this worked:
FROM tomcat:8.5.40-alpine
RUN apk add --no-cache redis
RUN apk add --no-cache screen
EXPOSE 6379
EXPOSE 3011
## Run Tomcat
CMD screen -d -m -S Redis /usr/bin/redis-server --bind '0.0.0.0' && \
${CATALINA_HOME}/bin/catalina.sh run
EXPOSE 8080
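A hedged usage sketch, assuming the image is tagged my-tomcat-redis (a placeholder name), publishing both the Tomcat and Redis ports:
docker build -t my-tomcat-redis .
docker run -d -p 8080:8080 -p 6379:6379 my-tomcat-redis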
If you are looking for a bare-minimum image with Node.js and redis-server, this works:
FROM nikolaik/python-nodejs:python3.5-nodejs8
RUN apt-get update && \
    apt-get -y install redis-server
COPY . /app
WORKDIR /app
# Redis needs to be started at container run time, not at build time; the original line
#   nohup redis-server &> redis.log &
# belongs in the image's CMD or entrypoint (see the sketch below)
and then you can have further steps for your Node application.
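A hedged sketch of how the end of such a Dockerfile could look, with src/server.js as a placeholder entry point (not taken from the question):
RUN npm install
EXPOSE 3011 6379
# start Redis in the background, then run the Node app in the foreground
CMD redis-server --daemonize yes && node src/server.js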