I'm trying to speed up spinning up Docker by having all the packages in yarn.lock already installed on the image. I think I'm running yarn install incorrectly, or that it is installing somewhere else?
Relevant part of the Dockerfile:
# Create a dir
WORKDIR /(WORKDIR)
# Time to install all our dependencies
COPY package.json /(WORKDIR)/package.json
COPY yarn.lock /(WORKDIR)/yarn.lock
# Need the executables to be in the path
ENV PATH /(WORKDIR)/node_modules/.bin:$PATH
RUN yarn check --verify-tree || yarn install --frozen-lockfile
I think my last line is incorrect. It is installing somewhere, but not onto the image itself? Either that, or caching might be an issue. If I start the image, I find that the output of yarn check --verify-tree still reflects the current state of the image.
I was unable to run yarn install during my Docker build. I was getting networking errors (getaddrinfo EAI_AGAIN):
Step 6/8 : COPY yarn.lock /app/yarn.lock
---> Using cache
---> e55488a3d051
Step 7/8 : RUN yarn
---> Running in 5bb8d663d00b
yarn install v1.22.15
[1/4] Resolving packages...
[2/4] Fetching packages...
error An unexpected error occurred: "https://registry.yarnpkg.com/app-root-path/-/app-root-path-3.0.0.tgz: getaddrinfo EAI_AGAIN registry.yarnpkg.com".
info If you think this is a bug, please open a bug report with the information provided in "/app/yarn-error.log".
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
ERROR: Service 'web-svc' failed to build: The command '/bin/sh -c yarn' returned a non-zero code: 1
I recently updated my Ubuntu Linux system, and figured that might be the cause. So, I restarted the docker.service and that resolved the issue.
sudo systemctl restart docker.service
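If restarting the daemon doesn't help, note that EAI_AGAIN is a DNS resolution failure, so it can be worth checking whether containers can resolve the registry at all. A minimal check (assuming you can pull a small image such as busybox) is:
docker run --rm busybox nslookup registry.yarnpkg.com
If that fails while the same nslookup works on the host, the Docker daemon's DNS configuration is the likely culprit, and restarting docker.service (or fixing the daemon's DNS settings) is the right place to look.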
Just RUN yarn, and make sure you COPY the code base after yarn.
FROM node:12.14.0-alpine3.11
ENV NODE_ENV=production
WORKDIR /app
COPY package.json ./
COPY yarn.lock ./
RUN yarn
COPY src ./
I tested it on my machine. Below you can see what happens if I change yarn.lock, and then what happens if I don't change yarn.lock:
$ docker build -t demo .
Step 1/6 : FROM node:12.14.0-alpine3.11
---> 1cbcaddb8074
Step 2/6 : ENV NODE_ENV=production
---> Using cache
---> dc7f1a2f7d90
Step 3/6 : WORKDIR /app
---> Using cache
---> eec9363713a5
Step 4/6 : COPY package.json ./
---> Using cache
---> fde6cf7bb577
Step 5/6 : COPY yarn.lock ./
---> 6a1369622d79
Step 6/6 : RUN yarn
---> Running in ff6433969bea
yarn install v1.21.1
[1/4] Resolving packages...
[2/4] Fetching packages...
warning sha.js#2.4.11: Invalid bin entry for "sha.js" (in "sha.js").
warning url-loader#1.1.2: Invalid bin field for "url-loader".
info fsevents#1.2.9: The platform "linux" is incompatible with this module.
info "fsevents#1.2.9" is an optional dependency and failed compatibility check. Excluding it from installation.
[3/4] Linking dependencies...
warning " > styled-components#5.0.1" has unmet peer dependency "react-is#>= 16.8.0".
[4/4] Building fresh packages...
Done in 35.97s.
Removing intermediate container ff6433969bea
---> 8dcd2124289d
Successfully built 8dcd2124289d
$ docker build -t demo .
Step 1/6 : FROM node:12.14.0-alpine3.11
---> 1cbcaddb8074
Step 2/6 : ENV NODE_ENV=production
---> Using cache
---> dc7f1a2f7d90
Step 3/6 : WORKDIR /app
---> Using cache
---> eec9363713a5
Step 4/6 : COPY package.json ./
---> Using cache
---> fde6cf7bb577
Step 5/6 : COPY yarn.lock ./
---> Using cache
---> 6a1369622d79
Step 6/6 : RUN yarn
---> Using cache
---> 8dcd2124289d
Step 7/7 : COPY src ./
---> 13474b882e11
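Applied back to the Dockerfile from the question, the same idea would look roughly like this (a sketch only; the /(WORKDIR) placeholder is kept as in the question):
WORKDIR /(WORKDIR)
# Copy only the dependency manifests first, so the install layer stays cached
# until package.json or yarn.lock actually changes.
COPY package.json yarn.lock ./
ENV PATH /(WORKDIR)/node_modules/.bin:$PATH
RUN yarn install --frozen-lockfile
# Copy the rest of the code base only after the install step.
COPY . .
The yarn check --verify-tree guard then becomes unnecessary, because the install layer is only rebuilt when the lockfile changes.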
I do not understand why Docker cannot find my Angular build folder in the container. Can you help me with this?
If I build with the docker compose command, I get the following error.
Below are all the steps to build my image and launch my container, up to the error.
WARNING: The Docker Engine you're using is running in swarm mode.
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.
To deploy your application across the swarm, use `docker stack deploy`.
Building linking-front
Sending build context to Docker daemon 425.6MB
Step 1/9 : FROM node:16.19.0 AS build
---> b22f8aab05da
Step 2/9 : WORKDIR /usr/src/app
---> Using cache
---> 5e2431455b65
Step 3/9 : COPY package.json package-lock.json ./
---> Using cache
---> 11d677269b0e
Step 4/9 : RUN npm install
---> Using cache
---> b5544be9159b
Step 5/9 : COPY . .
---> Using cache
---> 3403bfda57ca
Step 6/9 : RUN npm run build
---> Using cache
---> ae8e7960ac33
Step 7/9 : FROM nginx:1.23.3-alpine
---> 2bc7edbc3cf2
Step 8/9 : COPY nginx.conf /etc/nginx/nginx.conf
---> Using cache
---> beca38c7be94
Step 9/9 : COPY --from=build /usr/src/app/dist/linkingEducationSecurity-front /usr/share/nginx/html
COPY failed: stat usr/src/app/dist/linkingEducationSecurity-front: file does not exist
ERROR: Service 'linking-front' failed to build : Build failed
### STAGE 1: Build ###
FROM node:16.19.0 AS build
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build
### STAGE 2: Run ###
FROM nginx:1.23.3-alpine
COPY nginx.conf /etc/nginx/nginx.conf
COPY --from=build /usr/src/app/dist/linkingEducationSecurity-front /usr/share/nginx/html
I also use docker-compose:
version: '3.9'
services:
  linking-front:
    build: ./linkingEducationSecurity-front/
    ports:
      - "8080:80"
    volumes:
      - type: bind
        source: ./linkingEducationSecurity-front/src/
        target: /app/src
Since in the comments you tried RUN cd dist && ls, which gave you this output:
Step 7/10 : RUN cd dist && ls ---> Running in e8f002e82f3a linking-education-security-front
the steps and Dockerfile are fine; the COPY command that reads from the build stage just has the folder name misspelled.
Update this line:
COPY --from=build /usr/src/app/dist/linkingEducationSecurity-front /usr/share/nginx/html
to this:
COPY --from=build /usr/src/app/dist/linking-education-security-front /usr/share/nginx/html
and try rebuilding; this should work.
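If you want to double-check the exact folder name before editing the COPY line, one option (a sketch; the tag name here is arbitrary) is to build only the first stage and list its dist directory:
docker build --target build -t linking-front-debug ./linkingEducationSecurity-front/
docker run --rm linking-front-debug ls dist
Whatever name that prints is the name the COPY --from=build line has to use.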
I have this multistage Dockerfile:
FROM ruby:2.7.6 as base
ENV RAILS_ENV production
RUN apt-get update
RUN apt-get install -y \
nodejs \
npm \
vim
RUN npm install -g yarn
COPY . /app
WORKDIR /app
RUN gem install bundler
RUN bundle config set --local deployment 'true'
RUN bundle config set --local without 'development test'
RUN bundle
RUN DB_ADAPTER=nulldb bundle exec rake assets:precompile
FROM base as app
COPY . /app
WORKDIR /app
RUN gem install bundler
RUN bundle
RUN DB_ADAPTER=nulldb bundle exec rake assets:precompile
ENTRYPOINT ["/bin/bash", "-l", "-c"]
I build base with:
docker build --target base -t base .
Then app with:
docker build --target app -t app .
Pretty straightforward. The steps before app are slow, and I want to cache them.
Frequently (though not always, and I have no idea why) when I build app, it rebuilds the steps after the first COPY in the base stage. Why is this? Why isn't it just starting from the base image and building the later steps?
$ docker build --target app -t fapp .
Step 1/18 : FROM ruby:2.7.6 as base
---> f5dd208fb679
Step 2/18 : ENV RAILS_ENV production
---> Using cache
---> 1891c50dd23b
Step 3/18 : RUN apt-get update
---> Using cache
---> 2e8e284f77ec
Step 4/18 : RUN apt-get install -y nodejs npm vim
---> Using cache
---> 70695d86c467
Step 5/18 : RUN npm install -g yarn
---> Using cache
---> 706a007e43c6
Step 6/18 : COPY . /app
---> ad3eb2821641
Step 7/18 : WORKDIR /app
---> Running in 7a7f83850b35
Removing intermediate container 7a7f83850b35
---> f9b1984c059a
Step 8/18 : RUN gem install bundler
---> Running in 29c9929d887d
Successfully installed bundler-2.3.15
1 gem installed
Removing intermediate container 29c9929d887d
---> 619dc2797660
Step 9/18 : RUN bundle config set --local deployment 'true'
---> Running in 36c4e27ae841
Removing intermediate container 36c4e27ae841
---> 33d6e03834b9
Step 10/18 : RUN bundle config set --local without 'development test'
---> Running in 94afe6d016e9
Removing intermediate container 94afe6d016e9
---> 90796cebb446
Step 11/18 : RUN bundle
---> Running in 9ea7167d9aff
The Docker cache can easily be invalidated by a COPY or ADD command, since the cache check for these commands hashes the files and directories being copied.
Included in that hash are the contents of every file, and even the permissions on the files. So if any of these change by a single byte, the hash will be different, Docker will have a cache miss, and that line will be rerun.
From the point of the first cache miss, all remaining lines have to be rebuilt, since the preceding layer is now new and has not been used to run any of the following steps.
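The usual way to keep the slow steps cached (a sketch, not your exact Dockerfile; it assumes the gems are declared in Gemfile and Gemfile.lock at the project root) is to copy only the dependency manifests before the expensive bundle step, and copy the full source tree afterwards:
FROM ruby:2.7.6 as base
ENV RAILS_ENV production
WORKDIR /app
# Only the files that determine the gem set are copied here, so this layer's
# cache survives edits elsewhere in the repository.
COPY Gemfile Gemfile.lock ./
RUN gem install bundler
RUN bundle config set --local deployment 'true'
RUN bundle config set --local without 'development test'
RUN bundle
# The rest of the application comes in last; only the steps from here on
# rerun when application code changes.
COPY . /app
RUN DB_ADAPTER=nulldb bundle exec rake assets:precompile
With this layout, changing application code only invalidates the final COPY and the asset precompile, not the bundle install.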
When I run
docker build -f docker/webpack.docker services/webpack --build-arg env=production
twice in a row, Docker builds my image each time, starting from the first RUN (the COPY uses the cache).
FROM node:lts
ARG env=production
ENV NODE_ENV=$env
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile --production=false --non-interactive
COPY . .
RUN node --max-old-space-size=20000 node_modules/.bin/svg2fonts icons -o assets/markons -b mrkn -f markons -n Markons
RUN node --max-old-space-size=20000 node_modules/.bin/webpack --progress
How can I get it to cache those RUNs?
Output looks like:
Sending build context to Docker daemon 3.37MB
Step 1/9 : FROM node:lts
---> 0c601cba9f11
Step 2/9 : ARG env=production
---> Using cache
---> dd38b2167c75
Step 3/9 : ENV NODE_ENV=$env
---> Using cache
---> 800f5afd416c
Step 4/9 : WORKDIR /app
---> Using cache
---> d15b93dce11d
Step 5/9 : COPY package.json yarn.lock ./
---> Using cache
---> a049dd1609a8
Step 6/9 : RUN yarn install --frozen-lockfile --production=false --non-interactive
---> Using cache
---> d5e51b0d556c
Step 7/9 : COPY . .
---> 92990e326d4b
Step 8/9 : RUN node --max-old-space-size=20000 node_modules/.bin/svg2fonts icons -o assets/markons -b mrkn -f markons -n Markons
---> Running in a23878db7b0e
Wrote assets/markons/markons.css
Wrote assets/markons/markons.js
Wrote assets/markons/markons.html
Wrote assets/markons/markons-chars.json
Wrote assets/markons/markons.svg
Wrote assets/markons/markons.ttf
Wrote assets/markons/markons.woff
Wrote assets/markons/markons.woff2
Wrote assets/markons/markons.eot
Removing intermediate container a23878db7b0e
---> 3bce79d0ecf0
Step 9/9 : RUN node --max-old-space-size=20000 node_modules/.bin/webpack --progress
---> Running in b6d460488950
<s> [webpack.Progress] 0% compiling
...
See the description:
If the contents of all external files on the first COPY command are
the same, the layer cache will be used and all subsequent commands
until the next ADD or COPY command will use the layer cache.
However, if the contents of one or more external files are different,
then all subsequent commands will be executed without using the layer
cache.
So every time the content changes, the last two RUNs will be executed with no cache. There is no way to control this caching yet. Maybe it's a better option to specify volumes?
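A lighter-weight alternative (a sketch; it assumes the icon-font step only reads the icons directory, which may not be true for your project) is to copy just the inputs each RUN needs before it runs, so unrelated source changes don't invalidate the earlier step:
FROM node:lts
ARG env=production
ENV NODE_ENV=$env
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile --production=false --non-interactive
# Copy only the icon sources, so this layer is reused unless the icons change.
COPY icons ./icons
RUN node --max-old-space-size=20000 node_modules/.bin/svg2fonts icons -o assets/markons -b mrkn -f markons -n Markons
# Everything else comes in last; only the webpack build reruns on code changes.
COPY . .
RUN node --max-old-space-size=20000 node_modules/.bin/webpack --progress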
I keep getting a "Could not find a required file" error for my Docker build and I am not sure why. I am new to Docker, so is there a way I can "step through" the build process? I am running
docker build .
when I am INSIDE the /redribbon-client directory
output:
$ docker build .
Sending build context to Docker daemon 315.9MB
Step 1/6 : FROM node:alpine as builder
---> 4acd7c5129dc
Step 2/6 : WORKDIR "/client"
---> Using cache
---> c57cd917bb87
Step 3/6 : COPY ./package.json .
---> Using cache
---> 4098880ac4a5
Step 4/6 : RUN npm install
---> Using cache
---> 1015d2aec06c
Step 5/6 : COPY ./src/ .
---> Using cache
---> 5c895812a8c8
Step 6/6 : RUN npm run start
---> Running in 63ff064c6e84
> redribbon-client#0.1.0 start /client
> react-scripts start
Could not find a required file.
Name: index.html
Searched in: /client/public
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! redribbon-client#0.1.0 start: `react-scripts start`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the redribbon-client#0.1.0 start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2020-02-04T03_06_26_933Z-debug.log
project structure:
/redribbon-client
  Dockerfile
  /src
  /public
  package.json
Dockerfile
FROM node:alpine as builder
WORKDIR "/client"
COPY ./package.json .
RUN npm install
COPY ./src/ .
RUN npm run start
You will need to copy the public folder over as well. The error states that it was looking for index.html under the public folder:
Could not find a required file.
Name: index.html
Searched in: /client/public
Can you try again with the new Dockerfile?
FROM node:alpine as builder
WORKDIR "/client"
COPY ./package.json .
RUN npm install
COPY ./src/ ./src/
COPY ./public/ ./public/
RUN npm run start
Alternatively, you can copy everything over like this: COPY . .
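One more thing worth checking: the build context in your output is 315.9MB, which almost certainly means node_modules (and possibly a build output folder) is being sent to the daemon on every build. A .dockerignore file next to the Dockerfile (a suggested sketch; adjust the entries to your project) keeps those out of the context, which speeds up COPY . . and keeps the image smaller:
node_modules
build
.git
With that in place, COPY . . still brings in src, public and package.json, but not the locally installed dependencies.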
I'm trying to automatically test PRs to my project using Docker Cloud. I've set up a build rule as follows:
Dockerfile:
FROM node:8.4.0-alpine
ENV NODE_ENV=production
WORKDIR /olimat/api
COPY package.json package-lock.json ./
RUN npm install --quiet
COPY ./public ./public
COPY ./config ./config
COPY ./src ./src
CMD npm start
Dockerfile.dev:
FROM node:8.4.0-alpine
WORKDIR /olimat/api
COPY package.json package-lock.json ./
RUN npm install --quiet
COPY ./public ./public
COPY ./config ./config
COPY ./src ./src
COPY ./db ./db
docker-compose.test.yml:
version: '3.2'
services:
  sut:
    build:
      context: ./
      dockerfile: Dockerfile.dev
    command: npm test
    depends_on:
      - api
    environment:
      NODE_ENV: test
  api:
    build:
      context: ./
      dockerfile: Dockerfile
    depends_on:
      - db
  db:
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: dev123
    image: postgres:9.6.4-alpine
Running the tests locally with docker-compose -f docker-compose.test.yml run sut, everything works fine.
On Docker Cloud, the tests run, but the build seems to never get the exit code back:
I canceled the build after 1 hour and 46 minutes. What's happening? How can I make the sut service container exit after the tests are run?
The complete build log:
Building in Docker Cloud's infrastructure...
Cloning into '.'...
Warning: Permanently added the RSA host key for IP address '192.30.253.112' to the list of known hosts.
Reset branch 'master'
Your branch is up-to-date with 'origin/master'.
Pulling cache layers for index.docker.io/unemat/olimat-backend:latest...
Done!
KernelVersion: 4.4.0-93-generic
Arch: amd64
BuildTime: 2017-08-17T22:50:04.828747906+00:00
ApiVersion: 1.30
Version: 17.06.1-ce
MinAPIVersion: 1.12
GitCommit: 874a737
Os: linux
GoVersion: go1.8.3
Starting build of index.docker.io/unemat/olimat-backend:latest...
Step 1/9 : FROM node:8.4.0-alpine
---> 016382f39a51
Step 2/9 : ENV NODE_ENV production
---> Running in b0aa12f6d329
---> 8c0420481faa
Removing intermediate container b0aa12f6d329
Step 3/9 : WORKDIR /olimat/api
---> 669997c76951
Removing intermediate container b9344977ce13
Step 4/9 : COPY package.json package-lock.json ./
---> 562fb1b9d9db
Removing intermediate container 3778fb63cd12
Step 5/9 : RUN npm install --quiet
---> Running in 459a90d4ce4f
> uws#0.14.5 install /olimat/api/node_modules/uws
> node-gyp rebuild > build_log.txt 2>&1 || exit 0
added 261 packages in 19.34s
---> a22bd7c951bd
Removing intermediate container 459a90d4ce4f
Step 6/9 : COPY ./public ./public
---> 3555f3f71011
Removing intermediate container f6343f447c14
Step 7/9 : COPY ./config ./config
---> ffebbe0eae44
Removing intermediate container 1b6a25d1b044
Step 8/9 : COPY ./src ./src
---> ae66609e0177
Removing intermediate container a139a0a67b34
Step 9/9 : CMD npm start
---> Running in b1bc735877c5
---> fba69367a862
Removing intermediate container b1bc735877c5
Successfully built fba69367a862
Successfully tagged unemat/olimat-backend:latest
Starting Test in docker-compose.test.yml...
db uses an image, skipping
Building api
Step 1/9 : FROM node:8.4.0-alpine
---> 016382f39a51
Step 2/9 : ENV NODE_ENV production
---> Using cache
---> 8c0420481faa
Step 3/9 : WORKDIR /olimat/api
---> Using cache
---> 669997c76951
Step 4/9 : COPY package.json package-lock.json ./
---> Using cache
---> 562fb1b9d9db
Step 5/9 : RUN npm install --quiet
---> Using cache
---> a22bd7c951bd
Step 6/9 : COPY ./public ./public
---> Using cache
---> 3555f3f71011
Step 7/9 : COPY ./config ./config
---> Using cache
---> ffebbe0eae44
Step 8/9 : COPY ./src ./src
---> Using cache
---> ae66609e0177
Step 9/9 : CMD npm start
---> Using cache
---> fba69367a862
Successfully built fba69367a862
Successfully tagged bs3klcfwuijavr4uf4daf28_api:latest
Building sut
Step 1/9 : FROM node:8.4.0-alpine
---> 016382f39a51
Step 2/9 : MAINTAINER Josias Iquabius
---> Running in ed1306bea19a
---> 5956fb44e0cc
Removing intermediate container ed1306bea19a
Step 3/9 : WORKDIR /olimat/api
---> be7fd8615cd4
Removing intermediate container 2bde5cfe6bdd
Step 4/9 : COPY package.json package-lock.json ./
---> b68a99364f80
Removing intermediate container d0f4715b4774
Step 5/9 : RUN npm install --quiet
---> Running in f9f053df7774
> uws#0.14.5 install /olimat/api/node_modules/uws
> node-gyp rebuild > build_log.txt 2>&1 || exit 0
added 666 packages in 32.983s
---> 8f2ace5a6f9e
Removing intermediate container f9f053df7774
Step 6/9 : COPY ./public ./public
---> 0cac78c670e2
Removing intermediate container ab0f50cbc747
Step 7/9 : COPY ./config ./config
---> ce57c484d544
Removing intermediate container 126828beed7d
Step 8/9 : COPY ./src ./src
---> 7cd682b0f4d9
Removing intermediate container 819d441c2307
Step 9/9 : COPY ./db ./db
---> 244561b4bc52
Removing intermediate container 1a80d8f935b4
Successfully built 244561b4bc52
Successfully tagged bs3klcfwuijavr4uf4daf28_sut:latest
Creating network "bs3klcfwuijavr4uf4daf28_default" with the default driver
Pulling db (postgres:9.6.4-alpine)...
9.6.4-alpine: Pulling from library/postgres
Digest: sha256:5fd73de311d304caeb4f907d4f559d322805abc622e4baf5788c6a079ee5224e
Status: Downloaded newer image for postgres:9.6.4-alpine
Creating bs3klcfwuijavr4uf4daf28_db_1 ...
Creating bs3klcfwuijavr4uf4daf28_db_1
Creating bs3klcfwuijavr4uf4daf28_db_1 ... done Creating bs3klcfwuijavr4uf4daf28_api_1 ...
Creating bs3klcfwuijavr4uf4daf28_api_1
Creating bs3klcfwuijavr4uf4daf28_api_1 ... done Creating bs3klcfwuijavr4uf4daf28_sut_1 ...
Creating bs3klcfwuijavr4uf4daf28_sut_1
Creating bs3klcfwuijavr4uf4daf28_sut_1 ... done
npm info it worked if it ends with ok
npm info using npm#5.3.0
npm info using node#v8.4.0
npm info lifecycle olimat-backend#0.0.1~pretest: olimat-backend#0.0.1
npm info lifecycle olimat-backend#0.0.1~test: olimat-backend#0.0.1
> olimat-backend#0.0.1 test /olimat/api
> jest
PASS src/services/questions/questions.test.js
● Console
console.log src/models/questions.model.js:10
questions table does not exists!
info: after: questions - Method: find
PASS src/app.test.js
Test Suites: 2 passed, 2 total
Tests: 5 passed, 5 total
Snapshots: 0 total
Time: 6.505s
Ran all test suites.
Build canceled.
ERROR: Build failed with exit code 3
Build in 'master:/api' (4eeca024) canceled after 1:46:31