Docker - Daemon context is not pointing to local directory

I am on a Mac and have Docker Desktop running. My Dockerfile looks something like this -
FROM azul/zulu-openjdk:8
ARG buildNumber
COPY build/libs/my-jar${buildNumber}.jar my-jar.jar
EXPOSE 8080
CMD java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dcom.sun.management.jmxremote -noverify ${JAVA_OPTS} -jar my-jar.jar
When I try to build an image with docker build -t my-image:0.1 ., the COPY stage fails. Even though my current directory is /usr/me/projects/my-proj, it fails with this error message -
COPY failed: stat /var/lib/docker/tmp/docker-builder436046791/build/libs/my-jar.jar: no such file or directory
I would have assumed that the path I provided was relative to the current directory, but it seems Docker is not building on my local machine and is instead building somewhere remote.
Output of docker context list is -
docker context list
NAME        DESCRIPTION                                DOCKER ENDPOINT               KUBERNETES ENDPOINT                                            ORCHESTRATOR
default *   Current DOCKER_HOST based configuration    unix:///var/run/docker.sock   https://me-something.hcp.centralus.azmk8s.io:443 (default)    swarm
Anyone know what I am doing wrong here?
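A couple of things worth checking here (a sketch, not a confirmed diagnosis): the failing path ends in my-jar.jar with no build number, which is exactly what COPY build/libs/my-jar${buildNumber}.jar expands to when no --build-arg is passed, and a DOCKER_HOST/context pointing at a remote daemon would explain the build running "some place" else:
docker context use default                     # make sure the CLI targets the local Docker Desktop daemon
echo "$DOCKER_HOST"                            # should be empty (or unix:///var/run/docker.sock) for a local build
ls build/libs/                                 # confirm which jar name actually exists under the build context
docker build --build-arg buildNumber=123 -t my-image:0.1 .   # "123" is only an example value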

Related

Quarkus jvm deploy remote docker fail

I deploy my Quarkus app with a Dockerfile to a remote Docker daemon from my Windows computer, but it fails. How should I fix it?
I packaged my Quarkus app successfully, using the Maven JVM packaging.
This is my IDEA Docker image setting (screenshot omitted).
This is my Dockerfile:
FROM registry.access.redhat.com/ubi8/openjdk-17:1.11
ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en'
# We make four distinct layers so if there are application changes the library layers can be re-used
COPY --chown=185 target/quarkus-app/lib/ /deployments/lib/
COPY --chown=185 target/quarkus-app/*.jar /deployments/
COPY --chown=185 target/quarkus-app/app/ /deployments/app/
COPY --chown=185 target/quarkus-app/quarkus/ /deployments/quarkus/
EXPOSE 8080
USER 185
ENV AB_JOLOKIA_OFF=""
ENV JAVA_OPTS="-Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager"
ENV JAVA_APP_JAR="/deployments/quarkus-run.jar"
I get this failure reason:
Error response from daemon: COPY failed: file not found in build context or excluded by .dockerignore: stat target/quarkus-app/lib/: file does not exist
Failed to deploy 'bigquarkus Dockerfile: src/main/docker/Dockerfile.jvm': Can't retrieve image ID from build stream
I found that I had picked the wrong build context folder, so the build could not find the target/ files (or the .dockerignore) relative to it. When I set the context folder to ., I could build and push my Docker image.
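For reference, this is roughly what the equivalent command-line build looks like with the context set to the project root (the image tag is just a placeholder):
docker build -f src/main/docker/Dockerfile.jvm -t my-quarkus-app .
# the trailing "." is the build context; the target/quarkus-app/... paths in the COPY lines are resolved against it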

How to share folder with host when using `gitlab-runner` with docker?

On a Linux system I am running a simple test job from the command line using the following command:
gitlab-runner exec docker --builds-dir /home/project/buildsdir test_job
with the following job definition in .gitlab-ci.yml:
test_job:
  image: python:3.8-buster
  script:
    - date > time.dat
However, the build folder is empty after the job has run. I can only imagine that the builds dir refers to a location inside the Docker image.
Also, after having run the job successfully, I run
docker image ls
and I do not see a recent image.
So how can I "share"/"mount" the actual build folder of the Docker GitLab job to the host system so that I can access all the output files?
I looked at the documentation and found nothing, and the same goes for
gitlab-runner exec docker --help
I also tried to use artifacts:
test_job:
  image: python:3.8-buster
  script:
    - pwd
    - date > time.dat
  artifacts:
    paths:
      - time.dat
but that also did not help. I was not able to find the file time.dat anywhere after the completion of the job.
I also tried to use docker-volumes:
gitlab-runner exec docker --docker-volumes /home/project/buildsdir/:/builds/project-0 test_job
gitlab-runner exec docker --docker-volumes /builds/project-0:/home/project/buildsdir/ test_job
but neither worked (job failed in both cases).
You have to configure your config.toml file, located at /etc/gitlab-runner/.
Here are the docs: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-section
First add a builds_dir under the [[runners]] section, then bind it to a directory on your host machine in the volumes entry of [runners.docker], like this:
[[runners]]
  builds_dir = "(your build dir)"
  [runners.docker]
    volumes = ["/tmp/build-dir:/(your build dir):rw"]
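A rough end-to-end sketch of those steps, assuming /tmp/build-dir is the host folder from the snippet above and the runner reads its configuration from /etc/gitlab-runner/config.toml:
sudo mkdir -p /tmp/build-dir        # host folder that will receive the job's working copy
# add the builds_dir and volumes entries shown above to /etc/gitlab-runner/config.toml, then:
sudo gitlab-runner restart          # make the runner reload its configuration
ls /tmp/build-dir                   # after the next job run, time.dat should show up here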

Docker volumes not mounting/linking

I'm on Docker Desktop for Windows. I am trying to use docker-compose as a build container: it builds my code, and the built code should then end up in my local build folder. The build processes are definitely succeeding; when I exec into my container, the files are there. However, nothing happens with my local folder - no build folder is created.
docker-compose.yml
version: '3'
services:
  front_end_build:
    image: webapp-build
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 5000:5000
    volumes:
      - "./build:/srv/build"
Dockerfile
FROM node:8.10.0-alpine
EXPOSE 5000
# add files from local to container
ADD . /srv
# navigate to the directory
WORKDIR /srv
# install dependencies
RUN npm install --pure-lockfile --silent
# build code (to-do: get this code somewhere where we can use it)
RUN npm run build
# install 'serve' and launch server.
# note: this is just to keep container running
# (so we can exec into it and check for the files).
# once we know that everything is working, we should delete this.
RUN npx serve -s -l tcp://0.0.0.0:5000 build
I also tried removing the final line that serves the folder. Then I actually did get a build folder, but that folder was empty.
UPDATE:
I've also tried a multi-stage build:
FROM node:12.13.0-alpine AS builder
WORKDIR /app
COPY . .
RUN yarn
RUN yarn run build
FROM node:12.13.0-alpine
RUN yarn global add serve
WORKDIR /app
COPY --from=builder /app/build .
CMD ["serve", "-p", "80", "-s", "."]
When my volumes aren't set (or are set to, say, some nonexistent source directory like ./build:/nonexistent), the app is served correctly, and I get an empty build folder on my local machine (empty because the source folder doesn't exist).
However when I set my volumes to - "./build:/app" (the correct source for the built files), I not only wind up with an empty build folder on my local machine, the app folder in the container is also empty!
It appears that what's happening is something like
1. Container is built, which builds the files in the builder.
2. Files are copied from builder to second container.
3. Volumes are linked, and then because my local build folder is empty, its linked folder on the container also becomes empty!
I've tried resetting my shared drives credentials, to no avail.
How do I do this?!?!
I believe you are misunderstanding how host volumes work. The volume definition:
./build:/srv/build
in the compose file will mount ./build from the host at /srv/build inside the container. This happens at run time, not during your image build, i.e. after the Dockerfile instructions have been performed. Nothing from the image is copied out to the host, and no files in the directory being mounted on top of will be visible (this is standard behavior of the Linux mount command).
If you need files copied back out of the container to the host, there are various options.
You can perform your steps to populate the build folder as part of the container running. This is common for development. To do this, your CMD likely becomes a script of several commands to run, with the last step being an exec to run your app (see the sketch after these options).
You can switch to a named volume. Docker will initialize these with the contents of the image. It's even possible to create a named bind mount to a folder on your host, which is almost the same as a host mount. There's an example of a named bind mount in my presentation here.
Your container entrypoint can copy the files to the host mount on startup. This is commonly seen on images that will run in unknown situations, e.g. the Jenkins image does this. I also do this in my save/load volume scripts in my example base image.
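As a rough illustration of the first option, assuming the ./build:/srv/build volume from the compose file stays in place and the npm build is moved out of the Dockerfile into a start script used as the CMD:
#!/bin/sh
# hypothetical start script: build at run time so the output lands in /srv/build,
# which is the host's ./build folder thanks to the volume mount
npm run build
exec npx serve -s -l tcp://0.0.0.0:5000 build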
tl;dr: volumes aren't mounted during the build stage, only while running a container. You can copy the data out to your local disk with something like docker run --rm -v "$PWD/build":/srv/build <image id> cp -R /app/. /srv/build (the -v flag must come before the image name, and the host side of the mount has to be an absolute path).
While Docker is building the image, it performs all actions in ephemeral containers; each command in your Dockerfile runs in a separate container, producing a layer that eventually becomes part of the final image.
The result is that the data flow during the build is unidirectional: you cannot mount a volume from the host into the build containers. When you run a build you will see Sending build context to Docker daemon, because your local Docker CLI is sending the context (the path you specified after docker build, usually . for the current directory) to the Docker daemon (the process that actually does the work). One key point to remember is that the Docker CLI (docker) doesn't do any work itself; it just sends commands to the Docker daemon, dockerd. The build stages shouldn't change anything on your local system; the container is designed to encapsulate the changes into the container image only, giving you a snapshot of the build that you can reuse consistently, knowing that the contents are the same.

How can I copy files from docker container to my host computer?

This may seem like a duplicate question, but I think it's not, and I have tried Google.
This is the situation: I want to build a CI pipeline with Gogs and Drone, and with their nice documentation I have set it up. But now I am stuck on how to copy files from the container to the host, from inside the container. Below is my Drone YAML config.
pipeline:
  build:
    image: node:7
    commands:
      - cd client
      - npm config set registry https://registry.npm.taobao.org
      - npm install --no-optional
      - yarn run build
      - sudo docker cp $(sudo docker ps -alq):$PWD/build /var/www/react/
The CI ends with error:
/bin/sh: sudo: not found
After trying without sudo, the error continues:
/bin/sh: docker: not found
The answers I found are all about copying files from a container to the host, where the shell runs on the host computer. But here the commands run inside the container, so what should I do? Or am I missing something?
You could mount a volume on the container - a volume is a directory mapping from the host to the container, i.e. a shared folder between the two.
In the container, you would then copy the files you want to the shared folder and the host could access them in the shared folder.
Don't copy the files into the shared folder before mounting has finished. If you do, the files will not be accessible.
Here are the docs on volumes:
https://docs.docker.com/storage/volumes/#create-and-manage-volumes
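As a rough sketch of that idea with plain docker and the paths from the question (how to wire the mount into the Drone step itself depends on your runner setup):
# share the host folder /var/www/react into the container, then copy the built files into it
docker run --rm -v /var/www/react:/output -v "$PWD/client":/src node:7 sh -c 'cp -R /src/build/. /output/'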

running a script in a docker container

I'm just new to Docker, so please bear with me. I'm trying to use docker-compose with an Alpine Dockerfile; ideally I'd spin the Alpine image up and have it continue to run:
FROM alpine:edge
I have a shell script inside the volume that is mounted in my docker-compose.yml:
version: '3'
services:
  solr-service:
    build: ./solr-service
    volumes:
      - /Users/asdf/customsolr/trunk:/asdf/customsolr/trunk
    ports:
      - 8801:8801
Within the mounted volume, at /Users/asdf/customsolr/trunk/startsolr.sh, I have a script that I've tried all kinds of approaches to run and keep running. Basically, if I run it locally on my own machine outside of Docker, it spins up the files in its directory in a mini custom Jetty instance. When I try to invoke the script through RUN or CMD, the Docker container has already finished, or it cannot find the needed start.jar.
#!/bin/sh
export DEBUG_ARGS=''
export DEBUG_ARGS='-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8909'
java -Xms512M -Xmx1024M $DEBUG_ARGS -Dfile.encoding=UTF8 -server -Dsolr.solr.home=cores -Dsolr.useFilterForSort=false -Djetty.home=_container -Djetty.logs=_container/logs -jar _container/start.jar
Any ideas what I'm doing wrong?
The Alpine image doesn't come with Java installed, so unless you're adding it to your image in the Dockerfile, I'd say the script just finishes immediately and your container exits.
It would be helpful if you provided your Dockerfile, the full docker-compose.yml, and the log from the container.
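A quick way to check that assumption, and one way to pull a JRE into the image (the openjdk8-jre package name is a guess; exact names vary between Alpine releases):
docker run --rm alpine:edge sh -c 'command -v java || echo "java is not installed"'
docker run --rm alpine:edge sh -c 'apk add --no-cache openjdk8-jre && java -version'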
