I'm following the instructions on Docker's website for building a parent image. I'm very new to Docker, and I'm on CentOS 7.5.
I ran the mkimage-yum.sh script suggested on the Docker website for CentOS. I didn't understand why the last line of the script, rm -rf "$target", was there, because it seems to delete all the work done by the script. So I commented it out, and it leaves a directory /tmp/mkimage-yum.sh.ahE8xx, which looks like a minimal Linux image with the typical Linux file structure (e.g. /usr/, /etc/).
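From reading the script, it looks like it tars that directory up and pipes it to docker import before the cleanup, roughly like this (my sketch, assuming the standard tar | docker import flow and my temp directory name):
# roughly what the script seems to do with $target before the rm -rf line (my sketch)
tar --numeric-owner -c -C /tmp/mkimage-yum.sh.ahE8xx . | docker import - centos:7.5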
In my home directory, I compiled the program main.c:
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    printf("Hello Docker World!\n");
    return 0;
}
Using gcc -static -static-libgcc -static-libstdc++ -o hello main.c, I compiled the code to a statically linked executable, as prescribed on the Docker webpage.
I created the Dockerfile:
FROM scratch
ADD hello /
CMD ["/hello"]
I start up the dockerd server and, in a separate terminal, I run docker build --tag hello .
The output is:
Sending build context to Docker daemon 864.8 kB
Step 1/3 : FROM scratch
--->
Step 2/3 : ADD hello /
---> Using cache
---> a38d49d40e50
Step 3/3 : CMD /hello
---> Using cache
---> 3bcbb04c367f
Successfully built 3bcbb04c367f
Gee whiz, it looks like it worked! However, I still only see Dockerfile, hello, and main.c in the directory where I did this. Docker clearly thinks it did something, but what? It didn't create any new files.
Now I run docker run --rm hello and it outputs Hello Docker World!.
However, I get disconcerting errors from the dockerd server:
ERRO[502548] containerd: deleting container error=exit status 1: "container f336b3a5505879453b4f7a00c06acf274d0a5f8b3d260762273a2d7c0a846141 does not exist\none or more of the container deletions failed\n"
WARN[502548] f336b3a5505879453b4f7a00c06acf274d0a5f8b3d260762273a2d7c0a846141 cleanup: failed to unmount secrets: invalid argument
QUESTIONS :
What exactly did docker build --tag hello . do? I see no output files from it.
What are the dockerd errors all about? Is dockerd perhaps looking for a docker image that docker build never created?
How does mkimage-yum.sh fit into this? Why does it delete all the work that it does at the end?
When you pass --rm to docker run, the container will run and then delete itself. If you want to keep the container, remove --rm from the command.
The docker build command reads the Dockerfile and creates a local image; in your case, it builds from scratch with the image name hello.
You will not see any additional files created in your folder; the image is stored in Docker's local image store. To see the created image, run docker images. There you should be able to see your image built with the tag hello.
When you run docker run <imagename>, it starts up a container from the provided image. That's why you see your C program's output.
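For example (a quick sketch, assuming the hello tag from your question; image IDs and sizes will differ on your machine):
# list the image that docker build stored locally
docker images hello
# start a disposable container from it
docker run --rm hello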
I am studying Docker these days and am confused about why RUN pwd just does not seem to work when running my Dockerfile.
I am working on iOS, and the full content of my Dockerfile is below:
FROM ubuntu:latest
MAINTAINER xxx
RUN mkdir -p /ln && echo hello world > /ln/wd6.txt
WORKDIR /ln
RUN pwd
CMD ["more" ,"wd6.txt"]
As far as I understand, after building the docker image with the tag 'wd8' and running it, I expected the result to look like this:
~ % docker run wd8
::::::::::::::
wd6.txt
::::::::::::::
hello world
ln
However, the actual output does not include the ln line.
I have tried RUN $pwd, and also added ENV at the beginning of my Dockerfile; neither works.
Please help point out where the problem is.
PS: I should not expect to see the directory 'ln' on my disk, right? Since it is supposed to be created within the container...?
There are actually multiple reasons you don't see the output of the pwd command, some of them already mentioned in the comments:
the RUN statements in your Dockerfile are only executed during the build stage, i.e. using docker build and not with docker run
when using the BuildKit backend (which is the case here) the output of successfully run commands is collapsed; to see them anyway use the --progress=plain flag
running the same build multiple times will use the build cache of the previous build and not execute the command again; you can disable this with the --no-cache flag
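For example, to see the pwd output during the build (a sketch, assuming the wd8 tag from the question and a Dockerfile in the current directory):
# show the output of each RUN step and ignore the build cache
docker build --progress=plain --no-cache -t wd8 .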
I am trying to debug some issues in my GitLab CI pipeline. I have a step B that uses some artifacts from step A.
Step A is very long (and is working in the CI), so I don't want to run it locally: I just download the resulting artifacts from GitLab. So I have an artifacts.zip, which I extracted to obtain an output and a logs directory. So far so good.
I want to run step B locally, using gitlab-runner. Note that I am using version 9.5 (https://docs.gitlab.com/runner/install/old.html).
I am using this command:
gitlab-runner exec docker step-b
As I explained, step-b needs the artifacts from step-a. This is what I tried:
gitlab-runner exec docker --docker-volumes ~/Downloads/output step-b
One of the scripts executed in step B does something like mv ../output /some/where/else. However, this script fails with the following error:
mv: cannot stat '../output': No such file or directory
Following this error, I have two questions:
Where is this script executed? It's called like this from the .gitlab-ci.yml:
./scripts/my_script.sh
What is the . in this context?
How can I make sure that using the --docker-volumes ~/Downloads/output will mount the directory in the right place so my script can find it?
EDIT
As requested, here is a description of step A.
script:
  - mkdir -p /usr/local/compil_result
  - ./scripts/compil.sh
  - mv /usr/local/compil_result ./output
artifacts:
  paths:
    - output
    - logs
Since you haven't mentioned what Docker image you use, I assume it's a custom image you or a colleague have made. I think you need to go back and check the Dockerfile of that image to see what the working directory for that script is.
Or you could try getting a shell inside the container and inspecting its directory structure first:
docker run --rm -it --entrypoint /bin/bash your-image-name
To mount a Docker volume, you need the host directory and container directory separated by a colon, and you should use full (absolute) paths for both of them.
Something like this:
gitlab-runner exec docker --docker-volumes '/home/username/Downloads/output:/output' step-b
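With that mount in place, the script inside the container would reference the absolute mount target rather than a relative path; a sketch, reusing the placeholder destination from your question:
# inside the build container, the artifacts now live at the mount target /output
mv /output /some/where/else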
I am new to Docker. I followed the instructions at https://docs.docker.com/develop/develop-images/baseimages/ to create a Docker image and tried to run it.
My Dockerfile is as follows:
FROM scratch
ADD hello.sh /
CMD ["/hello.sh"]
The hello.sh file is as follows. I have applied dos2unix to hello.sh to ensure the right line endings:
#!/bin/sh
echo "this is a test"
I followed the instruction in the doc to run the following command to build an image:
docker build --tag hello .
Then when I ran docker run --rm hello I got the following error:
[FATAL tini (8)] exec /hello.sh failed: No such file or directory
I have searched online and tried solutions from various posts, but none of them worked. Any insights on where I went wrong?
Related (non-helpful) threads:
1. https://forums.docker.com/t/standard-init-linux-go-175-exec-user-process-caused-no-such-file/20025/4
Building an image from 'scratch' means your resulting container starts from an empty filesystem. Especially being new to Docker, you should build from a small image like 'alpine' instead of 'scratch' and then run your script using sh.
If you are set on building from scratch, you will need to compile your own binary and then add it as the ENTRYPOINT or CMD of the image (see: Install Bash on scratch Docker image).
Docker documentation example on building from scratch: https://docs.docker.com/develop/develop-images/baseimages/
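For example, a minimal sketch of the Alpine-based approach (the alpine tag used here is an assumption, not something from your setup):
# a small base image that ships /bin/sh, unlike scratch
FROM alpine:3.18
# copy the script in and make sure it is executable
COPY hello.sh /hello.sh
RUN chmod +x /hello.sh
# invoke the shell explicitly so the missing-interpreter problem goes away
CMD ["/bin/sh", "/hello.sh"]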
I have a docker image with the following dockerfile code:
FROM scratch
RUN echo "Hello World - Dockerfile"
And I build my image from a PowerShell prompt like this:
docker build -t imagename .
Here is what I get when I build my image:
Sending build context to Docker daemon 194.5MB
Step 1/2 : FROM scratch
--->
Step 2/2 : RUN echo "Hello World - Dockerfile"
---> Running in 42d5e5add10e
invalid reference format
I want to run my image with a windows container.
What is missing to make it work?
Thanks
Your image doesn't have a command called echo.
A FROM scratch image contains absolutely nothing at all. No shells, no libraries, no system programs, nothing. The two most common uses for it are to build a base image from a tar file or to build an extremely minimal image from a statically-linked binary; both are somewhat advanced uses.
Usually you'll want to start from an image that contains a more typical set of operating system tools. On a Linux base (where I'm more familiar) ubuntu and debian are common, and alpine as well (though it has some occasional compatibility issues). @gp. suggests FROM microsoft/windowsservercore in a comment, and that's probably a good place to start for a Windows container.
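A minimal sketch of that suggestion (whether the microsoft/windowsservercore image pulls and runs depends on your Docker for Windows setup, so treat it as an assumption):
# a Windows base image that actually contains a shell and an echo command
FROM microsoft/windowsservercore
RUN echo "Hello World - Dockerfile"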
I have a simple Java server app with a Gradle build. It works perfectly with gradle run on my host machine. However, I want to build it into a Docker image and run it as a Docker container.
I'm using docker-machine (version 0.13.0):
docker-machine create --driver virtualbox --virtualbox-memory 6000 default
docker-machine start
eval $(docker-machine env default)
I have the following Dockerfile image build script in ./serverapp/Dockerfile:
FROM gradle:4.3-jdk-alpine
ADD . /code
WORKDIR /code
CMD ["gradle", "--stacktrace", "run"]
I can build perfectly:
➜ docker build -t my-server-app .
Sending build context to Docker daemon 310.3kB
Step 1/4 : FROM gradle:4.3-jdk-alpine
---> b803ec92baec
Step 2/4 : ADD . /code
---> Using cache
---> f458b0be79dc
Step 3/4 : WORKDIR /code
---> Using cache
---> d98d04eda627
Step 4/4 : CMD ["gradle", "--stacktrace", "run"]
---> Using cache
---> 869262257870
Successfully built 869262257870
Successfully tagged my-server-app:latest
When I try to run this image:
➜ docker run --rm my-server-app
FAILURE: Build failed with an exception.
* What went wrong:
Could not create service of type ScriptPluginFactory using BuildScopeServices.createScriptPluginFactory().
> Could not create service of type CrossBuildFileHashCache using BuildSessionScopeServices.createCrossBuildFileHashCache().
* Try:
Run with --info or --debug option to get more log output.
* Exception is:
org.gradle.internal.service.ServiceCreationException: Could not create service of type ScriptPluginFactory using BuildScopeServices.createScriptPluginFactory().
at org.gradle.internal.service.DefaultServiceRegistry$FactoryMethodService.invokeMethod(DefaultServiceRegistry.java:797)
<snip>
... 60 more
Caused by: org.gradle.api.UncheckedIOException: Failed to create parent directory '/code/.gradle/4.3' when creating directory '/code/.gradle/4.3/fileHashes'
at org.gradle.util.GFileUtils.mkdirs(GFileUtils.java:271)
at org.gradle.cache.internal.DefaultPersistentDirectoryStore.open(DefaultPersistentDirectoryStore.java:56)
Why would it have trouble creating that directory?
This should be a very easy task; can anyone tell me how they got this simple scenario working?
FYI, I'm running current versions of everything: Gradle 4.3.1 on my host, the official Gradle 4.3 base image from Docker Hub, the current version of JDK 8 on my host, and the current versions of docker, docker-machine, and docker-compose as well.
The fix was to specify --chown=gradle ownership on the /code directory in the Dockerfile. Many Docker images are designed to run as root, but the base Gradle image runs as the user gradle.
FROM gradle:4.3-jdk-alpine
ADD --chown=gradle . /code
WORKDIR /code
CMD ["gradle", "--stacktrace", "run"]
Ethan Davis suggested using /home/gradle rather than /code. That would probably work as well, but I didn't think of that.
The Docker image maintainer should provide a simple getting-started reference example that shows the recommended way to handle basic usage.
Looking at how the gradle image builds on the openjdk base image, we can see that Gradle projects are set up to run in /home/gradle. Check the code out here. gradle run has trouble in your new working directory, /code, because the .gradle folder is in the /home/gradle folder. If you copy/add your code into /home/gradle you should be able to run gradle run. This worked for me.
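A sketch of that alternative, adapted from the Dockerfile above (the project subdirectory under /home/gradle is an assumption):
FROM gradle:4.3-jdk-alpine
# copy the project into the gradle user's home, where the image expects to work
ADD --chown=gradle . /home/gradle/project
WORKDIR /home/gradle/project
CMD ["gradle", "--stacktrace", "run"]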