I dockerized my MEAN application with docker-compose, and it works fine.
Now I am trying to use volumes so that my Angular app (with ng serve) and my Express app (with nodemon) auto-restart while I code.
But an identical error appears for both the Angular and the Express container:
angular_1 |
angular_1 | up to date in 1.587s
angular_1 | found 0 vulnerabilities
angular_1 |
angular_1 | npm ERR! path /usr/src/app/package.json
angular_1 | npm ERR! code ENOENT
angular_1 | npm ERR! errno -2
angular_1 | npm ERR! syscall open
angular_1 | npm ERR! enoent ENOENT: no such file or directory, open '/usr/src/app/package.json'
angular_1 | npm ERR! enoent This is related to npm not being able to find a file.
angular_1 | npm ERR! enoent
angular_1 |
angular_1 | npm ERR! A complete log of this run can be found in:
angular_1 | npm ERR! /root/.npm/_logs/2019-04-07T20_51_38_933Z-debug.log
harmonie_angular_1 exited with code 254
See my folder hierarchy:
-project
  -client
    -Dockerfile
    -package.json
  -server
    -Dockerfile
    -package.json
  -docker-compose.yml
Here's my Dockerfile for angular:
# Create image based on the official Node 10 image from dockerhub
FROM node:10
# Create a directory where our app will be placed
RUN mkdir -p /usr/src/app
# Change directory so that our commands run inside this new directory
WORKDIR /usr/src/app
# Copy dependency definitions
COPY package*.json /usr/src/app/
# Install dependencies
RUN npm install
# Get all the code needed to run the app
COPY . /usr/src/app/
# Expose the port the app runs in
EXPOSE 4200
# Serve the app
CMD ["npm", "start"]
My Dockerfile for express:
# Create image based on the official Node 6 image from the dockerhub
FROM node:6
# Create a directory where our app will be placed
RUN mkdir -p /usr/src/app
# Change directory so that our commands run inside this new directory
WORKDIR /usr/src/app
# Copy dependency definitions
COPY package*.json /usr/src/app/
# Install dependencies
RUN npm install
# Get all the code needed to run the app
COPY . /usr/src/app/
# Expose the port the app runs in
EXPOSE 3000
# Serve the app
CMD ["npm", "start"]
And finally, my docker-compose.yml:
version: '3' # specify docker-compose version

# Define the services/containers to be run
services:
  angular: # name of the first service
    build: client # specify the directory of the Dockerfile
    ports:
      - "4200:4200" # specify port forwarding
    # WHEN ADDING VOLUMES, ERROR APPEARS!!!!!!
    volumes:
      - ./client:/usr/src/app

  express: # name of the second service
    build: server # specify the directory of the Dockerfile
    ports:
      - "3000:3000" # specify port forwarding
    links:
      - database
    # WHEN ADDING VOLUMES, ERROR APPEARS!!!!!!
    volumes:
      - ./server:/usr/src/app

  database: # name of the third service
    image: mongo # specify image to build container from
    ports:
      - "27017:27017" # specify port forwarding
I also had this error; it turned out to be an issue with my version of docker-compose. I'm running WSL on Windows 10, and the version of docker-compose installed inside WSL did not handle volume binding correctly. I fixed this by removing /usr/local/bin/docker-compose and then adding an alias to the Windows docker-compose executable:
alias docker-compose="/mnt/c/Program\ Files/Docker/Docker/resources/bin/docker-compose.exe"
If the above does not apply to you, try updating your version of docker-compose.
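As a quick sanity check of which binary you are actually running (a sketch; the paths assume a default Docker Desktop install):
which docker-compose    # shows whether the WSL or the Windows executable wins on PATH
docker-compose version  # compare against the current release before digging further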
Your volumes section should look like this:
volumes:
  - .:/usr/app
  - /usr/app/node_modules
After the source folder is mounted, node_modules inside the Docker container gets 'overwritten' by the bind mount, which is why you need the extra '/usr/app/node_modules' entry to preserve the modules installed at image build time. Full tutorial with a proper docker-compose.yml: https://all4developer.blogspot.com/2019/01/docker-and-nodemodules.html
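Adapted to the compose file in the question, a sketch (the paths follow the asker's /usr/src/app layout; adjust if yours differ):
services:
  angular:
    build: client
    ports:
      - "4200:4200"
    volumes:
      - ./client:/usr/src/app
      # anonymous volume shields the node_modules installed at image build time
      - /usr/src/app/node_modules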
Related
I am receiving a permission error when attempting to map volumes over. I don't understand why, since as far as I know I should be root by default. I tried chmodding the files to 777 on my host, but it had no effect.
client_1 |
client_1 | > web
client_1 | > expo start --web
client_1 |
client_1 | Uncaught Error Error: EACCES: permission denied, mkdir '/root/.expo'
arglasses_client_1 exited with code 1
# pull base image
FROM node:14
# set our node environment, either development or production
ARG NODE_ENV=development
ENV NODE_ENV $NODE_ENV
# default to port 19006 for node, and 19001 and 19002 (tests) for debug
ARG PORT=19006
ENV PORT $PORT
EXPOSE $PORT 19001 19002
# install global packages
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
ENV PATH /home/node/.npm-global/bin:$PATH
RUN npm i --unsafe-perm -g npm@latest expo-cli@latest
# install dependencies first, in a different location for easier app bind mounting for local development
RUN mkdir /opt/client
WORKDIR /opt/client
ENV PATH /opt/client/.bin:$PATH
# copy in our source code last, as it changes the most
WORKDIR /opt/client/app
# for development, we bind mount volumes; comment out for production
COPY . .
RUN npm install
ENTRYPOINT ["npm", "run"]
CMD ["web"]
services:
  client:
    build: client
    environment:
      - NODE_ENV=development
    tty: true
    ports:
      - '19006:19006'
      - '19001:19001'
      - '19002:19002'
    depends_on:
      - server
    volumes:
      - ./client:/opt/client/app
      - /opt/client/app/node_modules
I am running Docker on Windows 10, and when I run docker-compose up -d I get this error, but I don't know why.
npm WARN saveError ENOENT: no such file or directory, open '/var/www/package.json'
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN enoent ENOENT: no such file or directory, open '/var/www/package.json'
npm WARN www No description
npm WARN www No repository field.
npm WARN www No README data
npm WARN www No license field.
Here is my docker-compose.yaml file:
version: '3'
services:
  # Nginx client app server
  nginx-client:
    container_name: nginx-client
    build:
      context: ./docker/nginx-client
      dockerfile: Dockerfile
    restart: unless-stopped
    ports:
      - 28874:3000
    volumes:
      - ./client:/var/www
    networks:
      - app-network

# Networks
networks:
  app-network:
    driver: bridge
And here is my Dockerfile:
FROM node:12
WORKDIR /var/www
RUN npm install
CMD ["npm", "run", "serve"]
This is happening because you are first building an image out of your Dockerfile, which contains these commands:
WORKDIR /var/www
RUN npm install
But at image build time this directory is empty; bind mounting only takes place after the container is created. You can read more about it in the docs, which state:
A service definition contains configuration that is applied to each
container started for that service, much like passing command-line
parameters to docker run. Likewise, network and volume definitions are
analogous to docker network create and docker volume create.
If you need to have this file available at image build time, I'd suggest using the COPY command, like:
COPY ./client/package*.json ./
RUN npm install
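Note that with the compose file above the build context is ./docker/nginx-client, so ./client sits outside the context and cannot be COPYed from there. A sketch of one way to line things up, assuming you move the context to the project root and keep the Dockerfile where it is:
# docker-compose.yaml excerpt
    build:
      context: .
      dockerfile: ./docker/nginx-client/Dockerfile
# Dockerfile
FROM node:12
WORKDIR /var/www
COPY ./client/package*.json ./
RUN npm install
COPY ./client .
CMD ["npm", "run", "serve"]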
Quick question regarding Dockerfile.
I've got a folder structure like so:
docker-compose.yml
client
src
package.json
Dockerfile
...etc
The client folder contains a ReactJS application, and the root is a NodeJS server with TypeScript. I've created a Dockerfile like so:
FROM node
RUN mkdir -p /server/node_modules && chown -R node:node /server
WORKDIR /server
USER node
COPY package*.json ./
RUN npm install
COPY --chown=node:node . ./dist
RUN npm run build-application
COPY /src/views ./dist/src/views
COPY /src/public ./dist/src/public
EXPOSE 4000
CMD node dist/src/index.js
The npm run build-application command builds the client (npm run build --prefix ./client) and the server (rimraf dist && mkdir dist && tsc -p .). The problem is that Docker cannot find the client folder, and the error is:
npm ERR! enoent ENOENT: no such file or directory, open '/server/client/package.json'
npm ERR! enoent This is related to npm not being able to find a file.
Can someone explain why? And how to fix this?
Docker compose file:
...
  server:
    build:
      context: ./server
      dockerfile: Dockerfile
    image: mazosios-pedutes-server
    container_name: mazosios-pedutes-server
    restart: unless-stopped
    networks:
      - app-network
    env_file:
      - ./server/.env
    ports:
      - "4000:4000"
Since the error says that there is no client/package.json in /server, my question is the following one:
Is your ./client directory located within ./server?
The Dockerfile WORKDIR instruction makes all commands that follow it execute within the directory that you pass to WORKDIR as a parameter.
I guess that if you add RUN tree -d (lists only nested directories) after the last COPY instruction, you will be able to see where your client directory ended up, and you will be able to fix the path to it.
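One caveat: tree is not preinstalled in the node base image, so that RUN step would fail as-is. A sketch using find, which the image does ship with:
# temporary debugging step; remove once the path is known
RUN find . -maxdepth 3 -type d -print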
I am new to Docker. I am trying to create a Docker image for a NodeJS project, which I will upload/host on a Docker repository. When I execute docker-compose up -d, everything works fine and I can access the NodeJS server hosted inside the Docker containers. After that, I stopped all containers and tried to create a Docker image from the Dockerfile using the following commands:
docker build -t adonis-app .
docker run adonis-app
The first command executes without any error but the second command throws this error:
> adonis-fullstack-app@4.1.0 start /app
> node server.js
internal/modules/cjs/loader.js:983
throw err;
^
Error: Cannot find module '/app/server.js'
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:980:15)
at Function.Module._load (internal/modules/cjs/loader.js:862:27)
at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:71:12)
at internal/main/run_main_module.js:17:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! adonis-fullstack-app@4.1.0 start: `node server.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the adonis-fullstack-app@4.1.0 start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /app/.npm/_logs/2020-02-09T17_33_22_489Z-debug.log
Can someone help me with this error and tell me what is wrong with it?
Dockerfile I am using:
FROM node
ENV HOME=/app
RUN mkdir /app
ADD package.json $HOME
WORKDIR $HOME
RUN npm i -g @adonisjs/cli && npm install
CMD ["npm", "start"]
docker-compose.yaml
version: '3.3'
services:
  adonis-mysql:
    image: mysql:5.7
    ports:
      - '3306:3306'
    volumes:
      - $PWD/data:/var/lib/mysql
    environment:
      MYSQL_USER: ${DB_USER}
      MYSQL_DATABASE: ${DB_DATABASE}
      MYSQL_PASSWORD: ${DB_PASSWORD}
      MYSQL_RANDOM_ROOT_PASSWORD: 1
    networks:
      - api-network

  adonis-api:
    container_name: "${APP_NAME}-api"
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - "3333:3333"
    depends_on:
      - adonis-mysql
    networks:
      - api-network

networks:
  api-network:
Your Dockerfile is missing a COPY step to actually copy your application code into the image. When you docker run the image, there's no actual source code to run, and you get the error you're seeing.
Your Dockerfile should look more like:
FROM node
# WORKDIR creates the directory; no need to set $HOME
WORKDIR /app
COPY package.json package-lock.json ./
# all of your dependencies are listed in package.json
RUN npm install
# actually copy the application in
COPY . ./
CMD ["npm", "start"]
Now that your Docker image is self-contained, you don't need the volumes: that try to inject host content into it. You can also safely rely on several of Docker Compose's defaults (the default network and the generated container_name: are both fine to use). A simpler docker-compose.yml looks like
version: '3.3'
services:
  adonis-mysql:
    image: mysql:5.7
    # As you have it, except delete the networks: block
  adonis-api:
    build: .  # this directory is the context:; the default dockerfile: is used
    ports:
      - "3333:3333"
    depends_on:
      - adonis-mysql
There are several key problems in the set of artifacts and commands you show:
docker run and docker-compose ... are separate commands. The docker run command you show runs the image as-is, with its default command, with no volumes mounted and no ports published. docker run doesn't know about the docker-compose.yml file, so whatever options you have specified there won't have any effect. You probably mean docker-compose up, which will also start the database. (In your application, remember to retry the database connection several times; it can often take 30-60 seconds for the database to come up.)
If you're planning to push the image, you need to include the source code. You're essentially creating two separate artifacts in this setup: a Docker image with Node and some libraries, and a JavaScript application on your host. If you docker push the image, it won't include the application (because you're not COPYing it in), so you'll also have to distribute the source code separately. At that point there's not much benefit to using Docker; an end user may as well install Node, clone your repository, and run npm install themselves.
You're preventing Docker from seeing library updates. Putting node_modules in an anonymous volume seems to be a popular setup, and Docker will copy content from the image into that directory the first time you run the application. The second time you run the application, Docker sees the directory already exists, assumes it contains valuable user data, and refuses to update it. This leads to SO questions along the lines of "I updated my package.json file but my container isn't updating" (see the sketch just below for a workaround).
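If you keep the anonymous node_modules volume for development anyway, a sketch of forcing Compose to recreate it from the rebuilt image (the --renew-anon-volumes flag is available in recent docker-compose releases):
docker-compose build
docker-compose up --renew-anon-volumes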
Your docker-compose.yaml file has two services:
adonis-mysql
adonis-api
Only the second one uses the current Dockerfile, as can be seen in the following section:
build:
  context: .
  dockerfile: Dockerfile
The command docker build . will only build the image for the current Dockerfile, i.e. adonis-api, and that is the only service that then runs.
So most probably it is the missing MySQL service that is giving you the error. You can verify by running
docker ps -a
to check whether the SQL container is also running. Hope it helps.
Conclusion: Use docker-compose.
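A minimal sketch of bringing both services up together and verifying:
docker-compose up --build -d
docker-compose ps   # both adonis-api and adonis-mysql should show as Up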
Installed Docker 18.03 on the VSTS agent box (a self-hosted VSTS agent).
The user under which the agent runs has been added to the docker group.
When I try to build using the Docker Compose task in VSTS, the build fails with this error:
Couldn't connect to Docker daemon at http+docker://localhost - is it running?
If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
Couldn't connect to Docker daemon at http+docker://localhost - is it running?
If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
/usr/local/bin/docker-compose failed with return code: 1
I have been stuck on this for a few hours; any help will be awesome.
One more note: docker-compose works perfectly fine from the agent box itself, but when the build is triggered by the VSTS task I get this error.
docker-compose file:
version: '3'
services:
  some-api:
    build:
      context: .
      dockerfile: .docker/dockerfile1
    image: some.azurecr.io/some-api:latest
    container_name: 'some-api'
    ports:
      - '8080:80'

  some-website:
    build:
      context: .
      dockerfile: .docker/dockerfile2
    image: some.azurecr.io/some-website:latest
    container_name: 'some-website'
    ports:
      - '3434:3434'
Dockerfile for the API:
FROM microsoft/dotnet AS build
# Docker image containing the .NET Core SDK
COPY .api/ ./some-api
WORKDIR /some-api
RUN dotnet restore; dotnet publish -o out
# final image
FROM microsoft/aspnetcore
# .NET Core runtime-only image
COPY --from=build /some-api/out /some-api
WORKDIR /some-api
EXPOSE 80
ENTRYPOINT [ "dotnet", "some.dll" ]
Dockerfile for the website:
#----------------------
### STAGE 1: BUILD ###
#---------------------
# Building node from LTS version
FROM node:8.11.1 as builder
# Installing npm to remove warnings and optimize the container build process
# One of many warnings: npm WARN notice [SECURITY] deep-extend has 1 low vulnerability.
# Go here for more details: https://nodesecurity.io/advisories?search=deep-extend&version=0.5.0 -
# Run `npm i npm@latest -g` to upgrade your npm version, and then `npm audit` to get more info.
RUN npm install npm@latest -g
# Copying all necessary files required for npm install
COPY package.json ./
# Install npm dependencies in a different folder to optimize container build process
RUN npm install
# Create application directory and copy node modules to it
RUN mkdir /some-website
RUN cp -R ./node_modules ./some-website
# Setting application directory as work directory
WORKDIR /some-website
# Copying application code to container application directory
COPY . .
# Building the angular app
RUN npm run build.prod
#--------------------------------------------------
### STAGE 2: Setup nginx and Deploy application ###
#--------------------------------------------------
FROM nginx:latest
## Copy default nginx configuration file
COPY default.conf /etc/nginx/conf.d
## Remove default nginx website
RUN rm -rf /usr/share/nginx/html/*
# Copy the dist folder from the builder (STAGE 1) to the nginx public folder
COPY --from=builder /some-website/dist/prod /usr/share/nginx/html
CMD ["nginx","-g","daemon off;"]
Thanks
The issue was user permissions. After adding the user to the docker group with
sudo usermod -aG docker $USER
logging out and logging back in didn't work; I had to reboot my Ubuntu server for the group change to take effect.
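For completeness, newgrp is often suggested as a way to pick up the new group membership without a reboot; it may or may not help, depending on how the agent process was launched:
newgrp docker   # starts a subshell with the docker group active
docker info     # verify the daemon is reachable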