I need to use an SSH key inside a container during the build stage, and I do that with
RUN echo "${SSH_KEY}" > /root/.ssh/id_rsa
where SSH_KEY is a build arg. The problem is that once this command runs, the build output is messed up:
=> [internal] load build definition from Dockerfile 0.0s
[+] Building 733.0s (21/22)
=> [internal] load build definition from Dockerfile 0.0s
[+] Building 733.2s (21/22)
=> [internal] load build definition from Dockerfile 0.0s
[+] Building 733.3s (21/22)
=> [internal] load build definition from Dockerfile 0.0s
[+] Building 733.5s (21/22)
=> [internal] load build definition from Dockerfile 0.0s
[+] Building 733.6s (21/22)
=> [internal] load build definition from Dockerfile 0.0s
[+] Building 733.6s (22/22) FINISHED
The above is printed repeatedly until the build is done. Is there anything I can do about that? Otherwise, the container build works fine.
As commenters suggested, using the --mount=type=ssh flag on the RUN git clone lines works much better.
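For reference, a minimal sketch of that approach (the repository URL is a placeholder, and it assumes BuildKit is enabled and the key is loaded into an ssh-agent on the host):
# syntax=docker/dockerfile:1
FROM alpine
RUN apk add --no-cache git openssh-client
# Trust the Git host so the clone does not prompt; github.com is only an example.
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
# The agent socket is mounted only for this RUN step; the key never ends up in an image layer.
RUN --mount=type=ssh git clone git@github.com:example/private-repo.git /src
Then build with the key forwarded from the host's agent:
docker build --ssh default .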
I have a springrestapi project set up locally with a Dockerfile and a docker-compose.yml file, both running successfully. Now I have added my API tests to the same repository in a new directory called in-memory-tests.
The in-memory-tests directory has its own Dockerfile, which contains COPY commands to copy files into the image. When I run the docker-compose.yml file, it gives the error below.
[+] Running 0/1
⠿ testserv1 Error 5.1s
[+] Building 3.1s (8/10)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 32B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 34B 0.0s
=> [internal] load metadata for docker.io/library/maven:3.6.0-jdk-8-alpine 3.0s
=> CACHED [1/6] FROM docker.io/library/maven:3.6.0-jdk-8-alpine@sha256:c1439df43e994b9df98063458e704384b85914c8bef4c1de22f992f51dcc2d79 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 879B 0.0s
=> CACHED [2/6] COPY src /app/src 0.0s
=> ERROR [3/6] COPY testng.xml /app/ 0.0s
=> ERROR [4/6] COPY reports/testreport.html /app/reports/ 0.0s
[3/6] COPY testng.xml /app/:
[4/6] COPY reports/testreport.html /app/reports/:
failed to solve: failed to compute cache key: "/reports/testreport.html" not found: not found
GitHub repo link: https://github.com/aamirsuhailo1/SpringRestAPIsOnDocker/tree/testframework_addition
Got it resolved after adding context: ./in-memory-tests/ and removing dockerfile: in-memory-tests/Dockerfile.
You didn't set a build context in docker-compose.yml > testserv1 > build. If you provide a dockerfile property, you must also set the context.
Alternatively, you can just set the build property to the directory that contains a Dockerfile; in your case build: . should suffice, as the docker-compose.yml is in the same directory as the Dockerfile.
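For example, the relevant part of the compose file could look like this (a sketch; the service name is taken from the error output above, and the paths assume the layout described in the question):
services:
  testserv1:
    build:
      context: ./in-memory-tests
      dockerfile: Dockerfile
With the context set to ./in-memory-tests, the COPY paths in that Dockerfile resolve relative to the test directory instead of the repository root.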
I'm trying to unit test my PySpark code using pytest but can't figure out the proper installation steps. I was able to get this working locally on my Mac using this tutorial. I've tried two methods to accomplish this:
(1) Replicate what I did on my Mac in the Dockerfile, i.e. install pyspark, Apache Spark, Java 8, Scala, and pytest, and make sure I get the ENV paths correct.
(2) Use an existing image from Docker Hub such as Bitnami's.
I attempted (1) but could not find the right RUN command to install Java properly.
For (2), is there any way in the Dockerfile to install pytest on top of the Bitnami image, given that Bitnami does not give root access?
Note: Bitnami does not put py4j on the PYTHONPATH, so I had to add this line to the Dockerfile:
ENV PYTHONPATH="${SPARK_HOME}/python/lib/py4j-0.10.9.3-src.zip:${PYTHONPATH}"
How about building your image FROM bitnami/spark and adding pytest?
I created test_spark.py:
from pyspark.sql import SparkSession

def test1():
    spark = SparkSession.builder.getOrCreate()
    data = spark.sql("SELECT 1").collect()
    assert data == [(1,)]
and a Dockerfile:
FROM bitnami/spark:latest
RUN pip install pytest py4j
COPY test_spark.py .
CMD python -m pytest test_spark.py
Now I can build and run my container and execute the pytests:
docker build . -t pytest_spark && docker run pytest_spark
[+] Building 0.1s (8/8) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 36B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/bitnami/spark:latest 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 35B 0.0s
=> [1/3] FROM docker.io/bitnami/spark:latest 0.0s
=> CACHED [2/3] RUN pip install pytest py4j 0.0s
=> CACHED [3/3] COPY test_spark.py . 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:33b5f945afb750aecb0a8e1b2e811eb71b2bb2e67752e1b73a2c321bcc433841 0.0s
=> => naming to docker.io/library/pytest_spark 0.0s
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
08:13:35.34
08:13:35.34 Welcome to the Bitnami spark container
08:13:35.35 Subscribe to project updates by watching https://github.com/bitnami/containers
08:13:35.35 Submit issues and feature requests at https://github.com/bitnami/containers/issues
08:13:35.35
============================= test session starts ==============================
platform linux -- Python 3.8.15, pytest-7.2.0, pluggy-1.0.0
rootdir: /opt/bitnami/spark
collected 1 item
test_spark.py . [100%]
============================== 1 passed in 10.11s ==============================
I have followed these steps, and when I run
PS C:\dockeragent> docker build -t dockeragent:latest .
I get:
[+] Building 0.8s (3/3) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 31B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> ERROR [internal] load metadata for mcr.microsoft.com/windows/servercore:ltsc2019 0.7s
------
> [internal] load metadata for mcr.microsoft.com/windows/servercore:ltsc2019:
------
failed to solve with frontend dockerfile.v0: failed to create LLB definition: no match for platform in manifest sha256:etcetc: not found
I am using VS Code with the Docker extension on my local computer. How can I build this image?
From the log, the image is built from a Windows base image (windows/servercore:ltsc2019).
You need to check whether Docker Desktop on your local machine is running in Windows containers mode.
If not, you need to switch it to Windows containers.
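If I remember correctly, besides the tray-menu option "Switch to Windows containers...", Docker Desktop also ships a helper CLI that toggles the engine; the path below assumes a default installation:
# Run from an elevated PowerShell; toggles Docker Desktop between Linux and Windows containers
& "$Env:ProgramFiles\Docker\Docker\DockerCli.exe" -SwitchDaemon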
I have a test automation project that uses code built into a jar file, and that jar gets invoked via a bat file. All these files are stored within my project folder.
Contents of my Dockerfile:
FROM maven:3.8.1-adoptopenjdk-11
#WORKDIR C:/Work/Kickstart_TEM/Prefs
COPY Prefs /home/Prefs
COPY KickStart.jar /home/Prefs/KickStart.jar
CMD home\prefs\run.bat && cmd
docker build generates the following output:
[+] Building 0.3s (8/8) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 210B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/maven:3.8.1-adoptopenjdk-11 0.0s
=> [1/3] FROM docker.io/library/maven:3.8.1-adoptopenjdk-11 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 390B 0.0s
=> CACHED [2/3] COPY Prefs /home/Prefs 0.0s
=> CACHED [3/3] COPY KickStart.jar /home/Prefs/KickStart.jar 0.0s
=> exporting to image 0.1s
=> => exporting layers 0.0s
=> => writing image sha256:4c878e8a895b2fad307e00f1b2fb5c9b5df7dc630e87414230d1989b75a5ee17 0.0s
=> => naming to docker.io/library/demo2
docker run generates the following error:
PS C:\Work\Docker_POC> docker run -i -p 4044:4044 demo2
/bin/sh: 1: homeprefsrun.bat: not found
My container stops right away, so I am not even able to figure out whether my files and folders got copied successfully. I am unsure how to resolve this error.
First of all, you're trying to run a batch script under Linux (the Docker image you're using determines this).
In general, your CMD statement should look like CMD ["/bin/sh", "-c", "/home/Prefs/run.sh && cmd"] (although I'm not sure what cmd is or why you want to run it).
You should convert the batch script (run.bat) to a shell script. Also, there is a difference between home and /home, and filenames are case-sensitive (thus it's Prefs, not prefs).
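Putting that together, the tail of the Dockerfile could look like this (a sketch assuming run.bat has been ported to a shell script named run.sh inside the Prefs folder, and dropping the trailing cmd):
COPY Prefs /home/Prefs
COPY KickStart.jar /home/Prefs/KickStart.jar
# Make the ported script executable (run.sh is the shell-script port of run.bat)
RUN chmod +x /home/Prefs/run.sh
# Use an absolute, correctly cased path
CMD ["/bin/sh", "-c", "/home/Prefs/run.sh"]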
I have a CI script that builds Dockerfiles. My plan is that unit tests should be run in a test stage in each Dockerfile, for example:
FROM alpine AS build
WORKDIR /app
COPY src .
...
FROM build AS test
RUN mvn clean test
FROM build AS package
COPY --from=build ...
So, for a given Dockerfile, I would like to check if it has a test stage and, if so, run docker build --target test .... If it doesn't have a test stage, I don't want to run docker build (which would fail).
How can I check if a Dockerfile contains a certain stage without actually building it?
I do realize this question has some XY problem vibes to it, so feel free to enlighten me. But I also think the question can be generally useful anyway.
I'm going to shy away from trying to parse the Dockerfile since there are a lot of ways to inject false positives or negatives. E.g.
RUN echo \
FROM base as test
or
FROM base \
as test
So instead, I'm going to favor letting docker do the hard work and modifying the file so it doesn't fail when the test stage is missing. This can be done by adding a test stage to the file even when it already has a test stage. Whether you want to put this at the beginning or the end of the Dockerfile depends on whether you are running BuildKit:
$ cat df.dup-target
FROM busybox as test
RUN exit 1
FROM busybox as test
RUN exit 0
$ DOCKER_BUILDKIT=0 docker build --target test -f df.dup-target .
Sending build context to Docker daemon 20.99kB
Step 1/2 : FROM busybox as test
---> be5888e67be6
Step 2/2 : RUN exit 1
---> Running in 9f96f42bc6d8
The command '/bin/sh -c exit 1' returned a non-zero code: 1
$ DOCKER_BUILDKIT=1 docker build --target test -f df.dup-target .
[+] Building 0.1s (6/6) FINISHED
=> [internal] load build definition from df.dup-target 0.0s
=> => transferring dockerfile: 114B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 34B 0.0s
=> [internal] load metadata for docker.io/library/busybox:latest 0.0s
=> [test 1/2] FROM docker.io/library/busybox 0.0s
=> CACHED [test 2/2] RUN exit 0 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:8129063cb183c1c1aafaf3eef0c8671e86a54f795092fa7a918145c14da3ec3b 0.0s
Then you could append the always successful test at the beginning or end, passing that modified Dockerfile to stdin for the docker build to process:
$ cat df.simple
FROM busybox as build
RUN exit 0
$ cat - df.simple <<EOF | DOCKER_BUILDKIT=1 docker build --target test -f - .
FROM busybox as test
RUN exit 0
EOF
[+] Building 0.1s (6/6) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 109B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 34B 0.0s
=> [internal] load metadata for docker.io/library/busybox:latest 0.0s
=> [test 1/2] FROM docker.io/library/busybox 0.0s
=> CACHED [test 2/2] RUN exit 0 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:8129063cb183c1c1aafaf3eef0c8671e86a54f795092fa7a918145c14da3ec3b 0.0s
This is a simple grep invocation:
egrep -i -q '^FROM .* AS test$' Dockerfile
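In the CI script, that check could gate the build step, for example (a sketch; the Dockerfile path is whatever your script already iterates over):
if egrep -i -q '^FROM .* AS test$' Dockerfile; then
  docker build --target test .
else
  echo "No test stage in Dockerfile, skipping test build"
fi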
You also might consider running your unit tests outside of Docker, before you start building containers. (Or, if your CI system supports running steps inside containers, use a container to get a language runtime, but not necessarily build from your Dockerfile.) You'll still need a Docker-based setup to run larger integration tests, but you can run those against your built, production-ready containers.
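As an illustration only (this assumes GitHub Actions, which the question doesn't specify; the Maven image tag is borrowed from an earlier question on this page), a unit-test job running inside a language-runtime container could look like:
name: unit-tests
on: [push]
jobs:
  unit-test:
    runs-on: ubuntu-latest
    # Runtime container only; no project Dockerfile is built here
    container: maven:3.8.1-adoptopenjdk-11
    steps:
      - uses: actions/checkout@v4
      - run: mvn clean test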