Having gone through the solutions provided on this platform and searched the web, my error keeps popping up again and again, so I have decided to ask my question here.
While running docker build -t myimage ., I keep seeing the following error output:
...
#5 sha256:092586df92068bd6b59c497f379e48302aa1b27cf76b2de64d262ba7bc19e47b 2.10MB / 45.38MB 316.4s
#5 sha256:87fc2710b63fcf6c0a5c876b1b37773d9e949cb4d66eeb06889ef84f7b5a5a93 25.17MB / 214.83MB 964.5s
#5 DONE 966.7s
#7 [2/5] COPY . /app
#7 sha256:0930007e6ec71d357846ab0d691d72a68ae6ef349d84b3a69cec532b93def08c
#7 ERROR: failed to read expected number of bytes: unexpected EOF
------
> [2/5] COPY . /app:
------
failed to compute cache key: failed to read expected number of bytes: unexpected EOF
Here is my Dockerfile content:
FROM python:stretch
# Use `python:3.7` as a source image from the Amazon ECR Public Gallery
# We are not using `python:3.7.2-slim` from Docker Hub because it has a pull rate limit.
# FROM public.ecr.aws/sam/build-python3.7:latest
# FROM python:3.7
# Set up an app directory for your code
COPY . /app
WORKDIR /app
# Install `pip` and needed Python packages from `requirements.txt`
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
# Define an entrypoint which will run the main app using the Gunicorn WSGI server.
ENTRYPOINT ["gunicorn", "-b", ":8080", "main:APP"]
Also, this is my project structure:
CODEOWNERS
__pycache__/
buildspec.yml
main.py
test_main.py
Dockerfile
app/
ci-cd-codepipeline.cfn.yml
requirements.txt
README.md
aws-auth-patch.yml
iam-role-policy.json
simple_jwt_api.yml
venv/
I was asked to create an app directory, but that did not fix it. I would be glad to have this resolved.
Note: Docker Desktop is running perfectly.
Related
I had a Dockerfile that was working fine. However, to remote debug it, I read that I needed to install dlv in the image and then run dlv, passing it the app I am trying to debug as a parameter. So after installing dlv and attempting to run it, I get the error
exec /dlv: no such file or directory
This is the docker file
FROM golang:1.18-alpine AS builder
# Build Delve for debugging
RUN go install github.com/go-delve/delve/cmd/dlv@latest
# Create and change to the app directory.
WORKDIR /app
ENV CGO_ENABLED=0
# Retrieve application dependencies.
COPY go.* ./
RUN go mod download
# Copy local code to the container image.
COPY . ./
# Build the binary.
RUN go build -gcflags="all=-N -l" -o fooapp
# Use the official Debian slim image for a lean production container.
FROM debian:buster-slim
EXPOSE 8000 40000
RUN set -x && apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
ca-certificates && \
rm -rf /var/lib/apt/lists/*
# Copy the binary to the production image from the builder stage.
#COPY --from=builder /app/fooapp /app/fooapp #commented this out
COPY --from=builder /go/bin/dlv /dlv
# Run dlv and pass fooapp as a parameter
CMD ["/dlv", "--listen=:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/app/fooapp"]
The above results in exec /dlv: no such file or directory
I am not sure why this is happening. Being new to Docker, I have tried different ways to debug it. I used dive to check whether the image has dlv at the path /dlv, and it does. I have also attached an image of it.
You built dlv in an Alpine-based image. The dlv executable is linked against musl libc:
# ldd dlv
linux-vdso.so.1 (0x00007ffcd251d000)
libc.musl-x86_64.so.1 => not found
But then you switched to the glibc-based image debian:buster-slim. That image doesn't have the required libraries:
# find / -name libc.musl*
<nothing found>
That's why you can't execute dlv: the dynamic linker fails to find the required library.
You need to build in a glibc-based image. For example, replace the first line with:
FROM golang:bullseye AS builder
By the way, after you build, you need to run the container in privileged mode:
$ docker build . -t try-dlv
...
$ docker run --privileged --rm try-dlv
API server listening at: [::]:40000
2022-10-30T10:51:02Z warning layer=rpc Listening for remote connections (connections are not authenticated nor encrypted)
In a non-privileged container, dlv is not allowed to spawn a child process:
$ docker run --rm try-dlv
API server listening at: [::]:40000
2022-10-30T10:55:46Z warning layer=rpc Listening for remote connections (connections are not authenticated nor encrypted)
could not launch process: fork/exec /app/fooapp: operation not permitted
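If you then want to attach to that headless server from your host, one possible way (a sketch; publishing port 40000 and having Delve installed locally are assumptions, not part of the setup above) is:
$ docker run --privileged --rm -p 40000:40000 try-dlv
# in another terminal on the host
$ dlv connect 127.0.0.1:40000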
Really Minimal Image
You use debian:buster-slim to minimize the image; its size is 80 MB. But if you need a really small image, use busybox, which is only 4.86 MB of overhead.
FROM golang:bullseye AS builder
# Build Delve for debugging
RUN go install github.com/go-delve/delve/cmd/dlv@latest
# Create and change to the app directory.
WORKDIR /app
ENV CGO_ENABLED=0
# Retrieve application dependencies.
COPY go.* ./
RUN go mod download
# Copy local code to the container image.
COPY . ./
# Build the binary.
RUN go build -o fooapp .
# Download certificates
RUN set -x && apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
ca-certificates
# Use the official Debian slim image for a lean production container.
FROM busybox:glibc
EXPOSE 8000 40000
# Copy the binary to the production image from the builder stage.
COPY --from=builder /app/fooapp /app/fooapp
# COPY --from=builder /app/ /app
COPY --from=builder /go/bin/dlv /dlv
COPY --from=builder /etc/ssl /etc/ssl
# Run dlv and pass fooapp as a parameter
CMD ["/dlv", "--listen=:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/app/fooapp"]
# ENTRYPOINT ["/bin/sh"]
The image size is 25 MB, of which 18 MB come from dlv and 2 MB from the Hello World application.
When choosing images, take care that they use the same flavor of libc. golang:bullseye links against glibc, so the minimal image must be glibc-based.
But if you want a bit more comfort, use alpine with the gcompat package installed. It is a reasonably rich Linux with lots of external packages, for only about 6 MB more than busybox.
FROM golang:bullseye AS builder
# Build Delve for debugging
RUN go install github.com/go-delve/delve/cmd/dlv@latest
# Create and change to the app directory.
WORKDIR /app
ENV CGO_ENABLED=0
# Copy local code to the container image.
COPY . ./
# Retrieve application dependencies.
RUN go mod tidy
# Build the binary.
RUN go build -o fooapp .
# Use alpine lean production container.
# FROM busybox:glibc
FROM alpine:latest
# gcompat is the compatibility package that lets glibc-based apps run
# ca-certificates contains trusted TLS CA certs
# bash is just for comfort, I hate /bin/sh
RUN apk add gcompat ca-certificates bash
EXPOSE 8000 40000
# Copy the binary to the production image from the builder stage.
COPY --from=builder /app/fooapp /app/fooapp
# COPY --from=builder /app/ /app
COPY --from=builder /go/bin/dlv /dlv
# Run dlv and pass fooapp as a parameter
CMD ["/dlv", "--listen=:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/app/fooapp"]
# ENTRYPOINT ["/bin/bash"]
TL;DR
Run apt-get install musl, then /dlv should work as expected.
Explanation
Follow these steps:
docker run -it <image-name> sh
apt-get install file
file /dlv
Then you can see the following output:
/dlv: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-musl-x86_64.so.1, Go BuildID=xV8RHgfpp-zlDlpElKQb/DOLzpvO_A6CJb7sj1Nxf/aCHlNjW4ruS1RXQUbuCC/JgrF83mgm55ntjRnBpHH, not stripped
The confusing no such file or directory (see this question for related discussions) is caused by the missing /lib/ld-musl-x86_64.so.1.
As a result, the solution is to install the musl library by following its documentation.
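For the debian:buster-slim image from the question, a minimal sketch of that fix could be (assuming Debian's musl package, which ships /lib/ld-musl-x86_64.so.1):
# install the musl dynamic loader so musl-linked binaries can run
RUN apt-get update && \
    apt-get install -y --no-install-recommends musl && \
    rm -rf /var/lib/apt/lists/*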
My answer is inspired by this answer.
The no such file or directory error indicates that either your binary does not exist or it is dynamically linked against a library that does not exist.
As said in this answer, delve is linked against musl libc. So for your delve build you can disable CGO, since CGO can result in dynamic links to glibc/musl:
# Build Delve for debugging
RUN CGO_ENABLED=0 go install github.com/go-delve/delve/cmd/dlv@latest
...
This even allows you to use a scratch image for your final target later, and it does not require you to install additional packages such as musl, use a glibc-based image, or run in privileged mode.
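A minimal sketch of that approach, reusing the layout from the question (the scratch final stage and the CGO_ENABLED=0 build of fooapp are assumptions added on top of the original Dockerfile):
FROM golang:1.18-alpine AS builder
# Build a statically linked Delve
RUN CGO_ENABLED=0 go install github.com/go-delve/delve/cmd/dlv@latest
WORKDIR /app
COPY go.* ./
RUN go mod download
COPY . ./
# Build the app statically as well, keeping debug info for Delve
RUN CGO_ENABLED=0 go build -gcflags="all=-N -l" -o fooapp
FROM scratch
EXPOSE 8000 40000
COPY --from=builder /app/fooapp /app/fooapp
COPY --from=builder /go/bin/dlv /dlv
CMD ["/dlv", "--listen=:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/app/fooapp"]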
I am trying to install a private Python package, uploaded to Artifact Registry, inside a Docker container (to deploy it on Cloud Run).
I have successfully used that package in a Cloud Function in the past, so I am sure the package works.
cloudbuild.yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
args: [ 'build', '-t', 'gcr.io/${_PROJECT}/${_SERVICE_NAME}:$SHORT_SHA', '--network=cloudbuild', '.', '--progress=plain']
Dockerfile
FROM python:3.8.6-slim-buster
ENV APP_PATH=/usr/src/app
ENV PORT=8080
# Copy requirements.txt to the docker image and install packages
RUN apt-get update && apt-get install -y cython
RUN pip install --upgrade pip
# Set the WORKDIR to be the folder
RUN mkdir -p $APP_PATH
COPY / $APP_PATH
WORKDIR $APP_PATH
RUN pip install -r requirements.txt --no-color
RUN pip install --extra-index-url https://us-west1-python.pkg.dev/my-project/my-package/simple/ my-package==0.2.3 # This line is where the bug occurs
# Expose port
EXPOSE $PORT
# Use gunicorn as the entrypoint
CMD exec gunicorn --bind 0.0.0.0:8080 app:app
The permissions I added are:
Cloud Build default service account (project-number@cloudbuild.gserviceaccount.com): Artifact Registry Reader
service account running the Cloud Build: Artifact Registry Reader
service account running the app: Artifact Registry Reader
The cloudbuild error:
Step 10/12 : RUN pip install --extra-index-url https://us-west1-python.pkg.dev/my-project/my-package/simple/ my-package==0.2.3
---> Running in b2ead00ccdf4
Looking in indexes: https://pypi.org/simple, https://us-west1-python.pkg.dev/muse-speech-devops/gcp-utils/simple/
User for us-west1-python.pkg.dev: ERROR: Exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/pip/_internal/cli/base_command.py", line 167, in exc_logging_wrapper
status = run_func(*args)
File "/usr/local/lib/python3.8/site-packages/pip/_internal/cli/req_command.py", line 205, in wrapper
return func(self, options, args)
File "/usr/local/lib/python3.8/site-packages/pip/_internal/commands/install.py", line 340, in run
requirement_set = resolver.resolve(
File "/usr/local/lib/python3.8/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 94, in resolve
result = self._result = resolver.resolve(
File "/usr/local/lib/python3.8/site-packages/pip/_vendor/resolvelib/resolvers.py", line 481, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/usr/local/lib/python3.8/site-packages/pip/_vendor/resolvelib/resolvers.py", line 348, in resolve
self._add_to_criteria(self.state.criteria, r, parent=None)
File "/usr/local/lib/python3.8/site-packages/pip/_vendor/resolvelib/resolvers.py", line 172, in _add_to_criteria
if not criterion.candidates:
File "/usr/local/lib/python3.8/site-packages/pip/_vendor/resolvelib/structs.py", line 151, in __bool__
From your traceback log, we can see that Cloud Build doesn't have the credentials to authenticate to the private repo:
Step 10/12 : RUN pip install --extra-index-url https://us-west1-python.pkg.dev/my-project/my-package/simple/ my-package==0.2.3
---> Running in b2ead00ccdf4
Looking in indexes: https://pypi.org/simple, https://us-west1-python.pkg.dev/muse-speech-devops/gcp-utils/simple/
User for us-west1-python.pkg.dev: ERROR: Exception:   <-- asking for a username
I uploaded a simple package to a private Artifact Registry repo to test this out when building a container and also received the same message. Since you seem to be authenticating with a service account key, the username and password will need to be stored inside pip.conf:
pip.conf
[global]
extra-index-url = https://_json_key_base64:KEY#LOCATION-python.pkg.dev/PROJECT/REPOSITORY/simple/
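The KEY value is the base64-encoded content of the service account key file. Assuming a key file named key.json (the file name is illustrative), on Linux it could be produced with:
base64 -w 0 key.json
The -w 0 flag disables line wrapping so the key stays on a single line.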
This file therefore needs to be available during the build process. Multi-stage Docker builds are very useful here to ensure the credentials are not exposed, since we can choose which files make it into the final image (the credentials are only present while the packages are downloaded from the private repo):
Sample Dockerfile
# Installing packages in a separate image
FROM python:3.8.6-slim-buster as pkg-build
# Target Python environment variable to bind to pip.conf
ENV PIP_CONFIG_FILE /pip.conf
WORKDIR /packages/
COPY requirements.txt /
# Copying the pip.conf key file only during package downloading
COPY ./config/pip.conf /pip.conf
# Packages are downloaded to the /packages/ directory
RUN pip download -r /requirements.txt
RUN pip download --extra-index-url https://LOCATION-python.pkg.dev/PROJECT/REPO/simple/ PACKAGES
# Final image that will be deployed
FROM python:3.8.6-slim-buster
ENV PYTHONUNBUFFERED True
ENV APP_HOME /app
WORKDIR /packages/
# Copying ONLY the packages from the previous build
COPY --from=pkg-build /packages/ /packages/
# Installing the packages from the copied files
RUN pip install --no-index --find-links=/packages/ /packages/*
WORKDIR $APP_HOME
COPY ./src/main.py ./
# Executing sample flask web app
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 main:app
I based the Dockerfile above on this related thread, and I confirmed that the packages were correctly downloaded from my private Artifact Registry repo and that the pip.conf file was not present in the resulting image.
I'm trying to make my Dockerfile build faster by copying the whole directory (including vendor, because redownloading dependencies takes 10+ minutes in the third-world country where I live), but when I run the build, it always redownloads the vendored dependencies again and again, unlike running go mod vendor locally:
FROM golang:1.14-alpine AS builder
RUN apk --update add ca-certificates git make g++
ENV GO111MODULE=on
WORKDIR /app
RUN go get github.com/go-delve/delve/cmd/dlv
COPY . .
RUN go mod vendor
ARG COMMIT_HASH
ENV COMMIT_HASH=${COMMIT_HASH}
ARG BUILD_DATE
ENV BUILD_DATE=${BUILD_DATE}
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
go build \
-o app
FROM golang:1.14-alpine
WORKDIR /app
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
COPY --from=builder /go/bin/dlv /
COPY --from=builder /app/app .
COPY --from=builder /app/db ./db
EXPOSE 8080 63342
CMD [ "/dlv", "--listen=:63342", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "./app" ]
Previously, using this (without vendor) was also slow:
COPY go.mod .
COPY go.sum .
RUN go mod download -x
COPY . .
Trying this also didn't work:
COPY vendor /go/pkg/mod
COPY vendor /go/pkg/mod/cache/download
COPY go.mod .
COPY go.sum .
RUN go mod download -x
COPY . .
How do I force it to use the copied vendor directory instead of redownloading again and again?
The expected behavior is:
when the local checkout has a vendor directory (created with go mod vendor), the docker build should use it
but on CI (since vendor/*/* is not committed to the repo), or for a developer who doesn't have vendor/*/*, it should redownload everything (I don't really care, since they have good bandwidth)
the go mod vendor command in the Dockerfile is for CI and for devs who haven't run go mod vendor
go mod vendor only downloads a dependency from the network if it is not already available locally; otherwise it just copies the dependency into the vendor folder without touching the network. So your issue comes from the Go module cache not being reused across builds.
As a solution, you could use the BuildKit cache mount. Here is a minimal example:
main.go:
package main
import _ "github.com/jeanphorn/log4go"
func main() {
}
Dockerfile:
# syntax = docker/dockerfile:1.3
FROM golang:1.14-alpine AS builder
RUN apk --update add git
ENV GO111MODULE=on
WORKDIR /app
COPY main.go /app
RUN go mod init hello
RUN --mount=type=cache,mode=0755,target=/go/pkg/mod go get github.com/go-delve/delve/cmd/dlv && go get github.com/jeanphorn/log4go
RUN --mount=type=cache,mode=0755,target=/go/pkg/mod go mod vendor
1st Execution:
$ export DOCKER_BUILDKIT=1
$ docker build --progress=plain -t abc:1 . --no-cache
#16 [builder 6/7] RUN --mount=type=cache,mode=0755,target=/go/pkg/mod go get github.com/go-delve/delve/cmd/dlv && go get github.com/jeanphorn/log4go
#16 sha256:ae394bc67787799808175eada48c5f4e09101b6e153d535ddb5e4040fbf74395
#16 1.941 go: downloading github.com/go-delve/delve v1.7.1
#16 4.296 go: found github.com/go-delve/delve/cmd/dlv in github.com/go-delve/delve v1.7.1
......
#16 23.78 go: finding module for package github.com/toolkits/file
#16 23.96 go: downloading github.com/toolkits/file v0.0.0-20160325033739-a5b3c5147e07
#16 24.17 go: found github.com/toolkits/file in github.com/toolkits/file v0.0.0-20160325033739-a5b3c5147e07
#16 DONE 27.3s
2nd Execution:
$ export DOCKER_BUILDKIT=1
$ docker build --progress=plain -t abc:1 . --no-cache
#15 [builder 6/7] RUN --mount=type=cache,mode=0755,target=/go/pkg/mod go get github.com/go-delve/delve/cmd/dlv && go get github.com/jeanphorn/log4go
#15 sha256:bee74f92ceb79cce449b9702c892cb39815461981838f6b63d500414be87c21d
#15 1.467 go: found github.com/go-delve/delve/cmd/dlv in github.com/go-delve/delve v1.7.1
#15 7.511 go: github.com/jeanphorn/log4go upgrade => v0.0.0-20190526082429-7dbb8deb9468
#15 7.533 go: finding module for package github.com/toolkits/file
#15 7.675 go: found github.com/toolkits/file in github.com/toolkits/file v0.0.0-20160325033739-a5b3c5147e07
#15 DONE 8.7s
You can see that the Go module cache generated by the first run is reused by the second run without downloading from the internet, so it now behaves the same as when you run it on the host.
NOTE: I don't suggest bind-mounting a cache directory from the host into the container; I don't think that is portable.
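If you want to carry the same idea over to the Dockerfile from the question, a sketch could look like this (the /root/.cache/go-build path assumes Go's default build cache location for the root user):
# syntax = docker/dockerfile:1.3
FROM golang:1.14-alpine AS builder
RUN apk --update add ca-certificates git make g++
WORKDIR /app
COPY go.mod go.sum ./
# reuse the module cache across builds
RUN --mount=type=cache,target=/go/pkg/mod go mod download
COPY . .
# reuse both the module cache and the build cache
RUN --mount=type=cache,target=/go/pkg/mod \
    --mount=type=cache,target=/root/.cache/go-build \
    CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o app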
I have created a Node.js project called simpleWeb. The project contains package.json and index.js.
index.js
const express = require('express');
const app = express();
app.get('/', (req, res) => {
res.send('How are you doing');
});
app.listen(8080, () => {
console.log('Listening on port 8080');
});
package.json
{
"dependencies": {
"express": "*"
},
"scripts": {
"start": "node index.js"
}
}
I have also created a Dockerfile to build the Docker image for my Node.js project.
Dockerfile
# Specify a base image
FROM node:alpine
# Install some dependencies
COPY ./ ./
RUN npm install
# Default command
CMD ["npm", "start"]
When I try to build the Docker image using the "docker build ." command, it throws the error below.
Error Logs
simpleweb » docker build . ~/Desktop/jaypal/Docker and Kubernatise/simpleweb
[+] Building 16.9s (8/8) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 37B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/node:alpine 8.7s
=> [auth] library/node:pull token for registry-1.docker.io 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 418B 0.0s
=> [1/3] FROM docker.io/library/node:alpine@sha256:5b91260f78485bfd4a1614f1afa9afd59920e4c35047ed1c2b8cde4f239dd79b 0.0s
=> CACHED [2/3] COPY ./ ./ 0.0s
=> ERROR [3/3] RUN npm install 8.0s
------
> [3/3] RUN npm install:
#8 7.958 npm ERR! Tracker "idealTree" already exists
#8 7.969
#8 7.970 npm ERR! A complete log of this run can be found in:
#8 7.970 npm ERR!     /root/.npm/_logs/2020-12-24T16_48_44_443Z-debug.log
------
executor failed running [/bin/sh -c npm install]: exit code: 1
The log output above mentions a path, /root/.npm/_logs/2020-12-24T16_48_44_443Z-debug.log, where I should be able to find the full logs.
But that file is not present on my local machine.
I don't understand what the issue is.
This issue happens due to changes in Node.js starting with version 15. When no WORKDIR is specified, npm install is executed in the root directory of the container, which results in this error. Executing npm install in a project directory of the container, specified with WORKDIR, resolves the issue.
Use the following Dockerfile:
# Specify a base image
FROM node:alpine
#Install some dependencies
WORKDIR /usr/app
COPY ./ /usr/app
RUN npm install
# Set up a default command
CMD [ "npm","start" ]
Global install
If you want to install a package globally, outside of a working directory with a package.json, you should use the -g flag.
npm install -g <pkg>
This error may also be triggered if the CI software you're using (like semantic-release) is built in Node and you attempt to install it outside of a working directory.
The accepted answer is basically right, but when I tried it, it still didn't work. Here's why:
WORKDIR sets the directory that the following COPY resolves against. Having already set the working directory to /usr/app, it is wrong to copy from ./ (the directory you are working in) to ./usr/app, as this produces the following structure in the container: /usr/app/usr/app.
As a result, CMD ["npm", "start"], which runs in the directory specified by WORKDIR (/usr/app), does not find the package.json.
I suggest using this Dockerfile:
FROM node:alpine
WORKDIR /usr/app
COPY ./ ./
RUN npm install
CMD ["npm", "start"]
You should specify the WORKDIR prior to the COPY instruction, to ensure npm install runs inside the directory that contains all your application files. Here is how you can do this:
WORKDIR /usr/app
# Install some dependencies
COPY ./ ./
RUN npm install
Note that you can simply write COPY ./ ./ (the current local directory into the container directory, which is now /usr/app thanks to the WORKDIR instruction) instead of COPY ./ /usr/app.
Another good reason to use the WORKDIR instruction is that you avoid mixing your application files and directories with the root file system of the container (and so avoid overriding system directories in case your application has directories with the same names).
One more thing: it is good practice to segment your configuration a bit, so that when you make a change, for example in index.js (and therefore need to rebuild your image), you do not need to rerun npm install as long as package.json has not been modified.
Your application is very basic, but think of big applications where npm install can take several minutes.
To take advantage of Docker's layer caching, you can segment your configuration as follows:
WORKDIR /usr/app
# Install some dependencies
COPY ./package.json ./
RUN npm install
COPY ./ ./
This instructs Docker to cache the first COPY and RUN commands when package.json is not touched. So when you change, for instance, index.js and rebuild your image, Docker will use the cache of the previous instructions (the first COPY and RUN) and start executing from the second COPY. This makes your rebuild much quicker.
Example for image rebuild:
=> CACHED [2/5] WORKDIR /usr/app 0.0s
=> CACHED [3/5] COPY ./package.json ./ 0.0s
=> CACHED [4/5] RUN npm install 0.0s
=> [5/5] COPY ./ ./
Specifying the working directory as below inside the Dockerfile will work:
WORKDIR '/app'
Make sure to use --build in your docker-compose command to build from the Dockerfile again:
docker-compose up --build
# Specify a base image
FROM node:alpine
WORKDIR /usr/app
# Install some dependencies
COPY ./package.json ./
RUN npm install
COPY ./ ./
# Default command
CMD ["npm","start"]
This also works if you change your index file: after running docker build and docker run again, your changes automatically show up in the browser output.
A bit late to the party, but for projects that do not want to create a Dockerfile for the installer, it is also possible to run the installer from an ephemeral container. This gives full access to the Node CLI without having to install it on the host machine.
The command assumes it is run from the root of the project and that a package.json file is present. The -v $(pwd):/app option mounts the current working directory to the /app folder in the container, synchronizing the installed files back to the host directory. The -w /app option sets the working directory of the image to the /app folder. The --loglevel=verbose option makes the output of the install command verbose. More options can be found on the official Node Docker Hub page.
docker run --rm -v $(pwd):/app -w /app node npm install --loglevel=verbose
Personally I use a Makefile to store several Ephemeral container commands that are faster to run separate from the build process. But of course, anything is possible :)
Maybe you can change the Node version. Besides, don't forget WORKDIR:
FROM node:14-alpine
WORKDIR /usr/app
COPY ./ ./
RUN npm install
CMD ["npm", "start"]
Building on the answer of Col, you could also do the following in your viewmodel:
public class IndexVM {
@AfterCompose
public void doAfterCompose(@ContextParam(ContextType.COMPONENT) Component c) {
Window wizard = (Window) c;
Label label = (Label) c.getFellow("lblName");
....
}
}
In doing so, you actually have access to the label object and can perform all sorts of tasks with it (label.setValue(), label.getValue(), etc.).
The solutions given above didn't work for me. I changed the Node image in my Dockerfile from node:alpine to node:12.18.1 and it worked.
On the current latest node:alpine3.13 it is enough to copy the content of the root folder into the container's root folder with COPY ./ ./, while omitting the WORKDIR command. But as a practical solution I would recommend:
WORKDIR /usr/app - it is a convention among developers to put the project into a separate folder
COPY ./package.json ./ - here we copy only the package.json file in order to avoid npm rebuilds
RUN npm install
COPY ./ ./ - here we copy all the files (remember to create a .dockerignore file in the root dir to avoid copying your node_modules folder; a minimal example follows this list)
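A minimal .dockerignore for a project like this might contain (a sketch; adjust to your own layout):
node_modules
npm-debug.log
.git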
We also had a similar issue, so I replaced npm with yarn and it worked quite well. Here is the sample code.
FROM python:3.7-alpine
ENV CRYPTOGRAPHY_DONT_BUILD_RUST=1
#install bash
RUN apk --update add bash zip yaml-dev
RUN apk add --update nodejs yarn build-base postgresql-dev gcc \
    python3-dev musl-dev libffi-dev
RUN yarn config set prefix ~/.yarn
#install serverless
RUN yarn global add serverless#2.49.0 --prefix /usr/local && \
yarn global add serverless-pseudo-parameters#2.4.0 && \
yarn global add serverless-python-requirements#4.3.0
RUN mkdir -p /code
WORKDIR /code
COPY requirements.txt .
COPY requirements-test.txt .
RUN pip install --upgrade pip
RUN pip install -r requirements-test.txt
COPY . .
CMD ["bash"]
Try npm init and npm install express to create the package.json file.
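For example, from the project root (the -y flag simply accepts the default answers):
npm init -y
npm install express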
You can also specify a Node version lower than 15:
# Specify a base image
FROM node:14
# Install some dependencies
COPY ./ ./
RUN npm install
# Default command
CMD ["npm", "start"]
I've been stuck on this for the last 3 days. I'm building an image in Docker and the COPY command fails because it cannot find the right file.
FROM python:3.6.7-alpine
WORKDIR /usr/src/app
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip3 install -r requirements.txt
COPY . /usr/src/app
CMD python3 manage.py run -h 0.0.0.0
which is run by this docker-dev file:
version: '3.7'
services:
users:
build:
context: ./services/users
dockerfile: Dockerfile-dev
volumes:
- './services/users:/usr/src/app'
ports:
- 5001:5000
environment:
- FLASK_APP=project/__init__.py
- FLASK_ENV=development
and getting this error:
Building users
Step 1/6 : FROM python:3.6.7-alpine
---> cb04a359db13
Step 2/6 : WORKDIR /usr/src/app
---> Using cache
---> 06bb39a49444
Step 3/6 : COPY ./requirements.txt /usr/src/app/requirements.txt
ERROR: Service 'users' failed to build: COPY failed: stat /var/snap/docker/common/var-lib-docker/tmp/docker-builder353668631/requirements.txt: no such file or directory
I don't even know where to start with debugging this. When I tried to access the directory, it gave me a permission error, so I tried to run the command with sudo, which didn't help. Any thoughts?
A little late to reply, but the second COPY command, COPY . /usr/src/app, replaces the /usr/src/app content generated by RUN pip3 install -r requirements.txt.
Try
FROM python:3.6.7-alpine
WORKDIR /usr/src/app
# install in temp directory
RUN mkdir /dependencies
COPY ./requirements.txt /dependencies/requirements.txt
RUN cd /dependencies && pip3 install -r requirements.txt
COPY . /usr/src/app
# copy generated dependencies
RUN cp -r /dependencies/* /usr/src/app/
CMD python3 manage.py run -h 0.0.0.0
As larsks suggests in his comment, you need the file in the services/users directory. To understand why, it helps to understand the build "context".
Docker does not build on the client; it does not see your current directory or other files on your filesystem. Instead, the last argument to the build command is passed as the build context. With docker-compose, this context defaults to the current directory, which you will often see as . in a docker build command, but you can override it, as you've done here with ./services/users as your context. When you run a build, the very first step is to send that build context from the docker client to the server. Even when the client and server are on the same host (a common default, especially for desktop environments), this same process happens. Files listed in .dockerignore, and files in parent directories of the build context, are not sent to the docker server.
When you run a COPY or ADD command, the first argument (or all but the last argument when you have multiple) refers to files from the build context, and the last argument is the destination file or directory inside the image.
Therefore, when you put together this compose file entry:
build:
context: ./services/users
dockerfile: Dockerfile-dev
with this COPY command:
COPY ./requirements.txt /usr/src/app/requirements.txt
the COPY will try to copy the requirements.txt file from the build context generated from ./services/users, meaning ./services/users/requirements.txt needs to exist and must not be excluded by a .dockerignore file in ./services/users.
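In other words, relative to the compose file the layout is expected to look roughly like this (a sketch based on the files referenced above):
services/
  users/
    Dockerfile-dev
    requirements.txt
    manage.py
    project/
      __init__.py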
I had a similar problem building an image with beryllium, and I solved it by removing that entry from .dockerignore.
$ sudo docker build -t apache .
Sending build context to Docker daemon  10.55MB
Step 1/4 : FROM centos
 ---> 9f38484d220f
Step 2/4 : RUN yum install httpd -y
 ---> Using cache
 ---> ccdafc4ae476
Step 3/4 : COPY ./beryllium /var/www/html
COPY failed: stat /var/snap/docker/common/var-lib-docker/tmp/docker-builder04301
$ nano .dockerignore
startbootstrap-freelancer-master
run.sh
pro
fruit
beryllium
Bell.zip
Remove beryllium from that file:
$ sudo docker build -t apache .
Sending build context to Docker daemon 12.92MB
Step 1/4 : FROM centos
---> 9f38484d220f
Step 2/4 : RUN yum install httpd -y
---> Using cache
---> ccdafc4ae476
Step 3/4 : COPY ./beryllium /var/www/html
---> 40ebc02992a9
Step 4/4 : CMD apachectl -DFOREGROUND
---> Running in dab0a406c89e
Removing intermediate container dab0a406c89e
---> 1bea741cfb65
Successfully built 1bea741cfb65
Successfully tagged apache:latest