I am new to Docker. I'm creating a live-reload server for a project folder of static HTML files, based on the live-server npm package. This is what my Dockerfile looks like:
FROM node:8.9.3-alpine
LABEL Name=app-static Version=1.0.0
# Configure container image
ENV NODE_ENV development
WORKDIR /app
VOLUME [ "app" ]
RUN npm install -g live-server@1.2.0
EXPOSE 1764
CMD [ "live-server", "--port=1764", "--entry-file=index.html" ]
This is what my docker-compose.yml file looks like:
version: '2.1'
services:
  app-static:
    image: app-static
    build: .
    ports:
      - 1764:1764
When I run docker-compose up, my container spins up and I get the following message in my terminal:
Serving "/app" at http://127.0.0.1:1764
When I navigate to http://127.0.0.1:1764/index.html though, I see the following message in my browser window:
Cannot GET /index.html
Why is this happening? It looks like my project files aren't available to my container. I'd appreciate your help. Thanks.
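For what it's worth, a likely fix (a sketch, assuming the static HTML files sit next to the docker-compose.yml): the compose file never mounts the project directory, so /app inside the container is empty at runtime. Adding a volumes mapping makes the host files visible:

```yaml
version: '2.1'
services:
  app-static:
    image: app-static
    build: .
    ports:
      - 1764:1764
    # Mount the project folder into /app so live-server can find index.html
    volumes:
      - .:/app
```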
I have a project that has a docker-compose.yml set up to get it running locally for development purposes. It runs great on Linux (natively) and macOS (using Docker Desktop). I am just finishing getting it running on Windows using WSL2 and Docker Desktop 2.3.0.3 (which has proper WSL2 support). The problem is that my Dockerfile does a COPY ./from /to, and Docker doesn't seem to be able to find the file. I have set up a minimal test to recreate the problem.
I have the project set up with this directory structure:
docker/
  nginx/
    Dockerfile
    nginx.conf
docker-compose.yml
The nginx Dockerfile contains:
FROM nginx:1.17.9-alpine
# Add nginx configs
COPY ./docker/nginx/nginx.conf /etc/nginx/nginx.conf
# Copy source code for things like static assets
COPY . /application
# Expose HTTP/HTTPS ports
EXPOSE 80 443
And the docker-compose.yml file contains:
version: "3.1"
services:
  nginx:
    build: docker/nginx
    working_dir: /application
    volumes:
      - .:/application
    ports:
      - "80:80"
This is pretty basic - it's just copying the nginx.conf configuration file to /etc/nginx/nginx.conf inside the container.
When I run docker-compose up for this project, from the project root, inside WSL, I receive the following error:
Building nginx
Step 1/4 : FROM nginx:1.17.9-alpine
---> 377c0837328f
Step 2/4 : COPY ./docker/nginx/nginx.conf /etc/nginx/nginx.conf
ERROR: Service 'nginx' failed to build: COPY failed: stat /var/lib/docker/tmp/docker-builder502655363/docker/nginx/nginx.conf: no such file or directory
This is not what I expect (and not what happens on Linux/macOS systems), but I assume it's failing because of the relative path specified in the Dockerfile? Is this a Docker Desktop bug specific to WSL, and does anybody know a workaround in the meantime? Thank you!
The paths in a Dockerfile are resolved relative to the build context, not to the directory you run docker-compose from. In this example the context is docker/nginx, so the COPY source should be just nginx.conf.
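Concretely, with the docker/nginx context, the Dockerfile would look like this (a sketch; note that COPY . /application would then copy only the contents of docker/nginx, so if you want the whole project inside the image you would instead widen the context in docker-compose.yml with context: . plus dockerfile: docker/nginx/Dockerfile):

```dockerfile
FROM nginx:1.17.9-alpine

# Source paths are resolved against the build context (docker/nginx),
# so the config file is referenced directly by name.
COPY nginx.conf /etc/nginx/nginx.conf

# Expose HTTP/HTTPS ports
EXPOSE 80 443
```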
I have a sample application using Node.js and React, so my project folder consists of a client folder and a server folder. The client folder was created using create-react-app.
I have created a Dockerfile for each of the folders, and I am using a docker-compose.yml at the root of the project.
Everything is working fine. Now I just want to host this application, and I am trying to use Jenkins.
Since I have little knowledge of the DevOps side, I have some doubts:
1) If I use two Dockerfiles, one for the client and one for the server, and they are started by docker-compose.yml, will they run in two different containers or in a single container? From what I have read, I think it will create two containers; that's the point of the docker-compose.yml file. I'm a little bit confused about this.
2) Also, when I do sudo docker-compose up, it runs perfectly but shows "to create production build use npm run build". How can I change this based on the environment? Do I need to create a different docker-compose.yml file for each environment? How can I use the same file but run npm start or npm run build depending on the environment?
3) Can I use the docker-compose.yml file for building the pipeline in Jenkins, or do I need a Dockerfile in the root of the project? I have seen most projects having a single Dockerfile. Does that mean I am not able to use docker-compose.yml for hosting the application?
4) The reason I use NODE_COMMAND in the command property of the docker-compose.yml file for the server is that when I run the application locally I need auto reloading. If I set NODE_COMMAND=nodemon in the terminal, it is used instead of running node index.js; in production only node index.js is used, since I don't set any NODE_COMMAND.
5) Do I need the CMD in the Dockerfile of each of the client and server, given that when I run docker-compose up it takes the command from docker-compose.yml? I think the docker-compose.yml file takes precedence. Is that right?
6) What is the use of volumes? Are they required in the docker-compose.yml file?
7) In the .env file I am using API_HOST and APP_SERVER_PORT. How does this work internally with package.json? Is it doing the proxy thing? When we need to hit Node.js we usually set "proxy": "http://localhost:4000", but here it takes http://server:4000. How does this work?
8) When we create containers we have ports like 3000, 3001, and so on. How do the container port and our application port match? Do the environment variables and the ports entries in the docker-compose.yml file take care of that?
Please see the folder structure below:
movielisting
  client
    Dockerfile
    package.json
    package.lock.json
    ... other create-react-app folders like src..
  server
    Dockerfile
    index.js
  docker-compose.yml
  .env
Dockerfile -- client
FROM node:10.15.1-alpine
#Create app directory and use it as the working directory
RUN mkdir -p /srv/app/client
WORKDIR /srv/app/client
COPY package.json /srv/app/client
COPY package-lock.json /srv/app/client
RUN npm install
COPY . /srv/app/client
CMD ["npm", "start"]
Dockerfile -- server
FROM node:10.15.1-alpine
#Create app directory
RUN mkdir -p /srv/app/server
WORKDIR /srv/app/server
COPY package.json /srv/app/server
COPY package-lock.json /srv/app/server
RUN npm install
COPY . /srv/app/server
CMD ["node", "index.js"]
docker-compose.yml -- root of project
version: "3"
services:
  #########################
  # Setup node container
  #########################
  server:
    build: ./server
    expose:
      - ${APP_SERVER_PORT}
    environment:
      API_HOST: ${API_HOST}
      APP_SERVER_PORT: ${APP_SERVER_PORT}
    ports:
      - ${APP_SERVER_PORT}:${APP_SERVER_PORT}
    volumes:
      - ./server:/srv/app/server
    command: ${NODE_COMMAND:-node} index.js
  ##########################
  # Setup client container
  ##########################
  client:
    build: ./client
    environment:
      - REACT_APP_PORT=${REACT_APP_PORT}
    expose:
      - ${REACT_APP_PORT}
    ports:
      - ${REACT_APP_PORT}:${REACT_APP_PORT}
    volumes:
      - ./client/src:/srv/app/client/src
      - ./client/public:/srv/app/client/public
    links:
      - server
    command: npm run start
package.json -- client
"proxy": "http://server:4000"
What can I refactor here?
Any help appreciated.
1) If I use two Dockerfiles, one for the client and one for the server, and they are started by docker-compose.yml, will they run in two different containers or in a single container? From what I have read, I think it will create two containers; that's the point of the docker-compose.yml file. I'm a little bit confused about this.
Each Dockerfile builds a separate image, so in the end you will have two images: one for the React application and one for the Node.js backend. docker-compose then runs each image in its own container.
2) Also, when I do sudo docker-compose up, it runs perfectly but shows "to create production build use npm run build". How can I change this based on the environment? Do I need to create a different docker-compose.yml file for each environment? How can I use the same file but run npm start or npm run build depending on the environment?
You need to build the React application as part of the steps in its Dockerfile in order to use it as a production application. You can also use environment variables to customize the image during the build, using build-args, for example to pass a custom path or anything else.
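For instance, a hedged sketch of the client Dockerfile with a build-time switch (BUILD_ENV is a made-up argument name here, passed via docker build --build-arg BUILD_ENV=production):

```dockerfile
FROM node:10.15.1-alpine

# Hypothetical build-time switch; defaults to development
ARG BUILD_ENV=development

WORKDIR /srv/app/client
COPY package.json package-lock.json ./
RUN npm install
COPY . .

# Produce the static production bundle only when asked to
RUN if [ "$BUILD_ENV" = "production" ]; then npm run build; fi

CMD ["npm", "start"]
```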
3) Can I use the docker-compose.yml file for building the pipeline in Jenkins, or do I need a Dockerfile in the root of the project? I have seen most projects having a single Dockerfile. Does that mean I am not able to use docker-compose.yml for hosting the application?
It would be better to use the Dockerfile(s) with Jenkins to build your images, and keep the docker-compose.yml file(s) for deploying the application itself without using the build keyword.
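As a sketch, a Jenkins job along those lines could run steps like the following (the image names and the docker-compose.prod.yml file are placeholders, not part of this project):

```shell
# Build an image from each service's own Dockerfile and push it
docker build -t myorg/movielisting-server:latest ./server
docker build -t myorg/movielisting-client:latest ./client
docker push myorg/movielisting-server:latest
docker push myorg/movielisting-client:latest

# Deploy with a compose file that references the pushed images
# instead of using the `build:` keyword
docker-compose -f docker-compose.prod.yml up -d
```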
4) The reason I use NODE_COMMAND in the command property of the docker-compose.yml file for the server is that when I run the application locally I need auto reloading. If I set NODE_COMMAND=nodemon in the terminal, it is used instead of running node index.js; in production only node index.js is used, since I don't set any NODE_COMMAND.
Using command inside the docker-compose.yml file overrides the CMD that was set in the Dockerfile during the build step.
5) Do I need the CMD in the Dockerfile of each of the client and server, given that when I run docker-compose up it takes the command from docker-compose.yml? I think the docker-compose.yml file takes precedence. Is that right?
Generally speaking, yes, you need it. However, as long as you override it from the docker-compose file, you can set it to a placeholder such as CMD ["node", "--help"].
6) What is the use of volumes? Are they required in the docker-compose.yml file?
Volumes are needed when you have files shared between containers, or when you need to keep data persistent on the host.
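As an illustration of persistence (the db service here is hypothetical, not part of this project), a named volume keeps data across container restarts:

```yaml
services:
  db:
    image: postgres:11-alpine
    volumes:
      # Named volume: the data survives `docker-compose down` (without -v)
      - dbdata:/var/lib/postgresql/data

volumes:
  dbdata:
```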
7) In the .env file I am using API_HOST and APP_SERVER_PORT. How does this work internally with package.json? Is it doing the proxy thing? When we need to hit Node.js we usually set "proxy": "http://localhost:4000", but here it takes http://server:4000. How does this work?
server is an alias for the Node.js container inside the Docker network once you start your application. Why is it named server? Because that is the service name in this part of your docker-compose.yml file:
services:
  server:
But of course you can change it by adding an alias to the service under the networks keyword in the docker-compose.yml file.
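Sketched, an explicit alias (api is a made-up name for illustration) would look like:

```yaml
services:
  server:
    networks:
      default:
        aliases:
          - api
```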
Note: React itself is client-side, which means it runs in the browser, so it won't be able to contact the Node.js application through the Docker network. You can use the host IP itself, or use localhost and make the Node.js application accessible through localhost.
8) When we create containers we have ports like 3000, 3001, and so on. How do the container port and our application port match? Do the environment variables and the ports entries in the docker-compose.yml file take care of that?
Docker itself does not know which port your application is using, so you have to make both use the same port. In Node.js this is achievable with an environment variable.
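As a sketch of that, the server can derive its listen port from the same APP_SERVER_PORT variable that docker-compose passes in, so the container port and the application port always agree (resolvePort is a made-up helper for illustration):

```javascript
// Resolve the listen port from the environment, falling back to 4000
// (the value in .env) when APP_SERVER_PORT is unset or not a number.
function resolvePort(env) {
  const parsed = parseInt(env.APP_SERVER_PORT, 10);
  return Number.isNaN(parsed) ? 4000 : parsed;
}

const port = resolvePort(process.env);
console.log("listening on port " + port);
// in index.js this would then be used as e.g.: app.listen(port)
```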
For more details:
https://docs.docker.com/engine/reference/commandline/build/#set-build-time-variables---build-arg
https://docs.docker.com/compose/networking/
https://docs.docker.com/compose/compose-file/#aliases
https://docs.docker.com/compose/compose-file/#command
https://facebook.github.io/create-react-app/docs/deployment
If anyone is facing issues connecting React and Express: make sure there is NO localhost prefix on the server API address in the client code
(e.g. http://localhost:5000/api should be changed to /api),
since the proxy entry is in the package.json file.
PS: if there is no entry, add
{
  "proxy": "http://server:5000"
}
to package.json ('server' is your Express app's container name in the docker-compose file).
I finally made it work and thought I'd share this in case it helps anyone else.
I am trying to get Ember.js to run in a Docker container based on the danlynn/ember-cli image.
I have tried different versions of the Dockerfile and docker-compose.yml, but I always end up with the docker-compose up command complaining of the following:
node_modules appears empty, you may need to run npm install
The image and container are created, but the container will not start.
I am new to the Docker world, so any help would be greatly appreciated!
I am guessing I need to run npm install for the error to go away. I added it to the Dockerfile so that it would run as the image is built, but that did not seem to help.
Here is my Dockerfile contents:
FROM danlynn/ember-cli
WORKDIR /code
COPY package.json /code
COPY bower.json /code
RUN ember init
RUN ember init --yarn
RUN bower --allow-root install
RUN npm install
COPY . /code
CMD ["ember", "serve"]
and the docker-compose.yml file:
version: "3"
services:
  ember_gui:
    build: .
    container_name: ember_dev
    volumes:
      - .:/code
    ports:
      - "4200:4200"
      - "7020:7020"
      - "5779:5779"
Finally, here is the package.json just in case
{
  "name": "EmberUI",
  "version": "0.0.1",
  "description": "Test app GUI",
  "main": "index.js",
  "author": "Testing",
  "license": "MIT",
  "dependencies": {
    "chai": "^4.1.2",
    "mocha": "^5.2.0"
  }
}
OK, after much experimentation, I was able to get an Ember instance running in Docker based on the danlynn/ember-cli image.
Lessons learned:
1.- The image is apparently set up to run in the "myApp" directory in the container. I was trying to define a "code" directory to put all the files in, but apparently it really did not like that.
2.- The image needs to be initialized after it is installed by running ember init on the service. (Putting the command in a Dockerfile did not work, likely because the volume mounted at run time shadows whatever the build step created.) You just have to run the following command before you bring the container up with docker-compose up:
docker-compose run --rm ember_gui ember init
where ember_gui is the name of the ember service as per the docker-compose.yml file.
3.- The ember initialization will create a lot of files and subdirectories, so make sure to run it in a directory that has nothing else in it, for clarity.
Anyway, here is the content of my docker-compose.yml in case it is useful to anyone else (note that I am no longer using a separate Dockerfile and am instead using the image directly):
version: "3"
services:
  ember_gui:
    image: danlynn/ember-cli
    container_name: ember_dev
    volumes:
      - .:/myapp
    command: ember server
    ports:
      - "4200:4200"
      - "7020:7020"
      - "7357:7357"
To run it the first time:
docker-compose run --rm ember_gui ember init
docker-compose up
After that, you can just run
docker-compose up
I'm learning about Docker and trying to bring up a container with PHP, Apache, and the Lumen framework. When I execute the command to build the container, it returns success.
The problem is that when I open http://localhost:8080, the page shows me a 403 Forbidden from Apache. I accessed the container by SSH, looked in the folder /srv/app/, and there are no files there. I think the problem is the mapping of the root folder on the Windows host machine.
I'm using Windows 10.
Can anyone help me?
My Dockerfile
FROM php:7.2-apache
LABEL maintainer="rIckSanchez"
COPY docker/php/php.ini /usr/local/etc/php/
COPY . /srv/app
COPY docker/apache/vhost.conf /etc/apache2/site-available/000-default.conf
My docker-compose file
version: '3'
services:
  phpinfo:
    build: .
    ports:
      - "8080:80"
I am trying to create a docker-compose.yml which will allow me to start up a few services, where some of those services will have their own Dockerfile. For example:
- my-project
  - docker-compose.yml
  - web
    - Dockerfile
    - src/
  - worker
    - Dockerfile
    - src/
I'd like a developer to be able to checkout the project and just run docker-compose up --build to get going.
Also, I'm trying to mount the source for a service inside the Docker container, so that a developer is able to edit the files on the host machine and have those changes reflected inside the container immediately (say, if it's a Rails app, it will get recompiled on file change).
I have tried to get just the web service going, but I just cannot mount the web directory inside the container: https://github.com/zoran119/haskell-webservice
And here is docker-compose.yml:
version: "2"
services:
  web:
    build: web
    image: web
    volumes:
      - ./web:/app
Can anyone spot a problem here?
The problem is that the host ./web folder shadows the internal /app folder, which means anything inside /app is hidden by your host folder. So you can follow an approach like the one below.
Additional bash scripts for setup
./scripts/deploy_app.sh
#!/bin/bash
set -ex
# By default checkout the master branch, if none specified
BRANCH=${BRANCH:-master}
cd /usr/src/app
git clone https://github.com/tarunlalwani/docker-nodejs-sample-app .
git checkout $BRANCH
# Install app dependencies
npm install
./scripts/run_app.sh
#!/bin/bash
set -ex
cd /usr/src/app
exec npm start
Dockerfile
FROM node:boron
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY ./scripts /scripts
EXPOSE 8080
CMD [ "bash", "-c" , "/scripts/deploy_app.sh && /scripts/run_app.sh"]
Now in your docker-compose.yml you can use the following:
version: '3'
services:
  app:
    build:
      context: .
    volumes:
      - ./app:/usr/src/app
Now when you do docker-compose up, it will run /scripts/deploy_app.sh and deploy the app to /usr/src/app inside the container. The host folder ./app will have the source for developers to edit.
You can enhance the script so it does not download the source code if the folder already has data. The branch of the code can be controlled using the BRANCH environment variable. If you want, you can even run the script in the Dockerfile as well, to build images which contain the source by default.
See a detailed article I wrote about static and dynamic code deployment:
http://tarunlalwani.com/post/deploying-code-inside-docker-images-statically-dynamically/
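One way to implement that enhancement in deploy_app.sh is to guard the clone with a check (a sketch; using package.json as the marker for an existing checkout is an assumption):

```shell
#!/bin/bash
set -e

# has_source: true when the directory already contains a checkout,
# judged (by assumption) from the presence of package.json.
has_source() {
  [ -e "$1/package.json" ]
}

APP_DIR="${APP_DIR:-/usr/src/app}"
if has_source "$APP_DIR"; then
  echo "existing source found, skipping clone"
else
  echo "no source found, cloning"
  # git clone https://github.com/tarunlalwani/docker-nodejs-sample-app "$APP_DIR"
fi
```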
So it seems your issue turns out to be something different.
When I try docker-compose up, I get the error below:
Attaching to hask_web_1
web_1 | Version 1.3.2, Git revision 3f675146590da4f3edf768b89355f798229da2a5 (4395 commits) x86_64 hpack-0.15.0
web_1 | 2017-09-10 09:58:04.741696: [debug] Checking for project config at: /app/stack.yaml
web_1 | #(Stack/Config.hs:863:9)
web_1 | 2017-09-10 09:58:04.741873: [debug] Loading project config file stack.yaml
web_1 | #(Stack/Config.hs:881:13)
web_1 | <stdin>: hGetLine: end of file
hask_web_1 exited with code 1
But when I use docker-compose run web, I get the following output:
$ docker-compose run web
Version 1.3.2, Git revision 3f675146590da4f3edf768b89355f798229da2a5 (4395 commits) x86_64 hpack-0.15.0
2017-09-10 09:58:37.859351: [debug] Checking for project config at: /app/stack.yaml
#(Stack/Config.hs:863:9)
2017-09-10 09:58:37.859580: [debug] Loading project config file stack.yaml
#(Stack/Config.hs:881:13)
2017-09-10 09:58:37.862281: [debug] Trying to decode /root/.stack/build-plan-cache/x86_64-linux/lts-9.3.cache
So that made me realize your issue: docker-compose up and docker-compose run have one main difference, which is the tty. run allocates a tty while up doesn't. So you need to change the compose file to:
version: "2"
services:
  web:
    build: web
    image: web
    volumes:
      - ./web:/app
    tty: true