I have a test automation project that uses code built into a jar file, and that jar gets invoked via a bat file. All these files are stored within my project folder.
Contents of my Dockerfile:
FROM maven:3.8.1-adoptopenjdk-11
#WORKDIR C:/Work/Kickstart_TEM/Prefs
COPY Prefs /home/Prefs
COPY KickStart.jar /home/Prefs/KickStart.jar
CMD home\prefs\run.bat && cmd
docker build generates following output
[+] Building 0.3s (8/8) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 210B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/maven:3.8.1-adoptopenjdk-11 0.0s
=> [1/3] FROM docker.io/library/maven:3.8.1-adoptopenjdk-11 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 390B 0.0s
=> CACHED [2/3] COPY Prefs /home/Prefs 0.0s
=> CACHED [3/3] COPY KickStart.jar /home/Prefs/KickStart.jar 0.0s
=> exporting to image 0.1s
=> => exporting layers 0.0s
=> => writing image sha256:4c878e8a895b2fad307e00f1b2fb5c9b5df7dc630e87414230d1989b75a5ee17 0.0s
=> => naming to docker.io/library/demo2
Docker run generates following error:
PS C:\Work\Docker_POC> docker run -i -p 4044:4044 demo2
/bin/sh: 1: homeprefsrun.bat: not found
My container stops right away, so I am not even able to figure out whether my files and folders got copied successfully. I am unsure how to resolve this error.
First of all, you're trying to run a batch script under Linux (the Docker image you're using determines this).
In general, your CMD statement should look like CMD ["/bin/sh", "-c", "/home/Prefs/run.sh && cmd"] (although I'm not sure what cmd is and why you want to run it).
You should convert this batch script (run.bat) to a shell script. Also, there is a difference between home and /home, and filenames are case-sensitive (thus it's Prefs and not prefs).
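A minimal sketch of a corrected Dockerfile, assuming run.bat has been rewritten as a shell script Prefs/run.sh (the trailing && cmd is dropped, since cmd does not exist on Linux):
FROM maven:3.8.1-adoptopenjdk-11
COPY Prefs /home/Prefs
COPY KickStart.jar /home/Prefs/KickStart.jar
# absolute, case-correct Linux path with forward slashes
CMD ["/bin/sh", "-c", "/home/Prefs/run.sh"]
Make sure the script is executable (chmod +x Prefs/run.sh) before building, or invoke it as sh /home/Prefs/run.sh.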
Let's say, I have this easy docker-compose.yml:
services:
  foo:
    container_name: bar
    build: .
For the sake of completeness, I also want to share that simple Dockerfile:
FROM ubuntu:20.04
My folder structure is
temp
|- docker-compose.yml
|- Dockerfile
When I build the container with docker compose build, it gives no errors:
[+] Building 0.1s (5/5) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 31B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/ubuntu:20.04 0.0s
=> CACHED [1/1] FROM docker.io/library/ubuntu:20.04 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:87da********************************************************bb35 0.0s
=> => naming to docker.io/library/temp-foo 0.0s
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
The output of that is identical to docker build . -t foo. When I run it with docker run -dt --name bar foo, the container keeps running.
When I now issue docker compose up, I get bar exited with code 0:
[+] Running 2/2
- Network temp_default Created 0.0s
- Container bar Created 1.9s
Attaching to bar
bar exited with code 0
How is this possible, and how does it even make sense? Shouldn't both fail or both work?
I can share the Dockerfile if needed, it's close to that one.
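A likely explanation (not stated in the thread): ubuntu:20.04's default command is bash, and docker run -dt allocates a pseudo-TTY, so bash keeps waiting for input, while docker compose up allocates none by default, so bash exits immediately with code 0. A minimal sketch of the compose change that mirrors the -t flag:
services:
  foo:
    container_name: bar
    build: .
    # allocate a pseudo-TTY, like -t in `docker run -dt --name bar foo`
    tty: true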
I have a springrestapi project set up locally with a Dockerfile and docker-compose.yml file running successfully. Now I have added my API tests to this project, inside the same repository, by adding a new directory called in-memory-tests.
The in-memory-tests directory has a Dockerfile in it. This Dockerfile has commands to copy files into the image. When I run the docker-compose.yml file, it gives the error below.
[+] Running 0/1
⠿ testserv1 Error 5.1s
[+] Building 3.1s (8/10)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 32B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 34B 0.0s
=> [internal] load metadata for docker.io/library/maven:3.6.0-jdk-8-alpine 3.0s
=> CACHED [1/6] FROM docker.io/library/maven:3.6.0-jdk-8-alpine@sha256:c1439df43e994b9df98063458e704384b85914c8bef4c1de22f992f51dcc2d79 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 879B 0.0s
=> CACHED [2/6] COPY src /app/src 0.0s
=> ERROR [3/6] COPY testng.xml /app/ 0.0s
=> ERROR [4/6] COPY reports/testreport.html /app/reports/ 0.0s
[3/6] COPY testng.xml /app/:
[4/6] COPY reports/testreport.html /app/reports/:
failed to solve: failed to compute cache key: "/reports/testreport.html" not found: not found
GitHub repo link: https://github.com/aamirsuhailo1/SpringRestAPIsOnDocker/tree/testframework_addition
Got resolved after adding context: ./in-memory-tests/ and removing dockerfile: in-memory-tests/Dockerfile.
You didn't set a build context in docker-compose.yml > testserv1 > build. If you provide a dockerfile property you must also set the context.
Alternatively, you can just set the build property to the directory that contains the Dockerfile; in your case build: . should suffice, as the docker-compose.yml is in the same directory as the Dockerfile.
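For illustration, a sketch of both options in docker-compose.yml (the service name testserv1 is taken from the error output above; the path is assumed from the question's layout):
services:
  testserv1:
    # option 1: give context explicitly whenever a dockerfile property is used
    build:
      context: ./in-memory-tests
      dockerfile: Dockerfile
    # option 2: point build directly at the directory containing the Dockerfile
    # build: ./in-memory-tests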
I'm trying to unit test my pyspark code using pytest but can't figure out the proper steps and method of installation. I was able to get this working locally on my Mac using this tutorial. I've tried two methods to accomplish this:
1. Replicate what I did on my Mac in the Dockerfile, i.e. install pyspark, apache-spark, Java 8, Scala, and pytest, and make sure I get the ENV paths correct.
2. Use an image from Docker Hub, like bitnami.
I attempted (1) but could not find the right RUN command to install Java properly.
For (2), is there any way in the Dockerfile for me to install pytest separately on top of the bitnami image, since bitnami does not give root access?
Note:
Bitnami does not put py4j in the PYTHONPATH so I had to add this line to the docker file:
ENV PYTHONPATH="${SPARK_HOME}/python/lib/py4j-0.10.9.3-src.zip:${PYTHONPATH}"
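In context, that line sits right after the base image in the Dockerfile; a sketch (the py4j version in the path must match the one shipped with your Spark installation):
FROM bitnami/spark:latest
# make py4j importable for pytest; path taken from the note above
ENV PYTHONPATH="${SPARK_HOME}/python/lib/py4j-0.10.9.3-src.zip:${PYTHONPATH}"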
How about building your image FROM bitnami/spark and adding pytest?
I created test_spark.py:
from pyspark.sql import SparkSession
def test1():
    spark = SparkSession.builder.getOrCreate()
    data = spark.sql("SELECT 1").collect()
    assert data == [(1,)]
and a Dockerfile:
FROM bitnami/spark:latest
RUN pip install pytest py4j
COPY test_spark.py .
CMD python -m pytest test_spark.py
Now I can build and run my container and execute the pytests:
docker build . -t pytest_spark && docker run pytest_spark
[+] Building 0.1s (8/8) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 36B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/bitnami/spark:latest 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 35B 0.0s
=> [1/3] FROM docker.io/bitnami/spark:latest 0.0s
=> CACHED [2/3] RUN pip install pytest py4j 0.0s
=> CACHED [3/3] COPY test_spark.py . 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:33b5f945afb750aecb0a8e1b2e811eb71b2bb2e67752e1b73a2c321bcc433841 0.0s
=> => naming to docker.io/library/pytest_spark 0.0s
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
08:13:35.34
08:13:35.34 Welcome to the Bitnami spark container
08:13:35.35 Subscribe to project updates by watching https://github.com/bitnami/containers
08:13:35.35 Submit issues and feature requests at https://github.com/bitnami/containers/issues
08:13:35.35
============================= test session starts ==============================
platform linux -- Python 3.8.15, pytest-7.2.0, pluggy-1.0.0
rootdir: /opt/bitnami/spark
collected 1 item
test_spark.py . [100%]
============================== 1 passed in 10.11s ==============================
This question already has answers here:
Why is docker build not showing any output from commands?
The answers here don't seem to work. The answer here also doesn't work. I suspect something has changed about Docker's build engine since then.
My Dockerfile:
FROM node:16.14.2-alpine
WORKDIR /usr/src/app
COPY package.json yarn.lock ./
RUN yarn
COPY dist .
EXPOSE $SEEDSERV_PORT
RUN pwd
RUN echo "output"
RUN ls -alh
RUN contents="$(ls -1 /usr/src/app)" && echo $contents
# CMD ["node","server.js"]
ENTRYPOINT ["tail", "-f", "/dev/null"]
Which gives this output from build:
✗ docker build --progress auto --build-arg SEEDSERV_PORT=9999 -f build/api/Dockerfile .
[+] Building 2.1s (14/14) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 37B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/node:16.14.2-alpine 1.9s
=> [internal] load build context 0.0s
=> => transferring context: 122B 0.0s
=> [1/9] FROM docker.io/library/node:16.14.2-alpine@sha256:da7ef512955c906b6fa84a02295a56d0172b2eb57e09286ec7abc02cfbb4c726 0.0s
=> CACHED [2/9] WORKDIR /usr/src/app 0.0s
=> CACHED [3/9] COPY package.json yarn.lock ./ 0.0s
=> CACHED [4/9] RUN yarn 0.0s
=> CACHED [5/9] COPY dist . 0.0s
=> CACHED [6/9] RUN pwd 0.0s
=> CACHED [7/9] RUN echo "output" 0.0s
=> CACHED [8/9] RUN ls -alh 0.0s
=> CACHED [9/9] RUN contents="$(ls -1 /usr/src/app)" && echo $contents 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:d1dd7ac452ecacc803eed2bb1deff654c3296a5576b6f418dbd07c5f2e644f1a 0.0s
Adding --progress plain gives slightly different output but not what I'm looking for, e.g.:
#11 [7/9] RUN echo "output"
#11 sha256:634e07d201926b0f70289515fcf4a7303cac3658aeddebfa9552fc3054ed4ace
#11 CACHED
How can I get a directory listing during build in 20.10.3? I can exec into the running container but that's a lot more work.
If your build is cached, there's no output from the RUN steps to show. You need to include --no-cache to run the commands again so there is output to display, and also include --progress plain to send that output to the console.
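For example, applied to the build command from the question:
docker build --no-cache --progress plain --build-arg SEEDSERV_PORT=9999 -f build/api/Dockerfile .
With the cache disabled, the RUN pwd, RUN echo, and RUN ls -alh steps execute again and their output shows up in the plain progress log.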
Dockerfile has the following content,
FROM node:16.4.2-alpine3.14
WORKDIR /app
COPY package.json .
COPY . /app
And ran the following build command,
docker build -t app:0.1 .
It took 28.4 seconds, and below is the terminal log:
[+] Building 28.4s (10/10) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 124B 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 53B 0.0s
=> [internal] load metadata for docker.io/library/node:16.4.2-alpine3.14 17.1s
=> [auth] library/node:pull token for registry-1.docker.io 0.0s
=> [internal] load build context 0.1s
=> => transferring context: 2.01MB 0.0s
=> [1/4] FROM docker.io/library/node:16.4.2-alpine3.14@sha256:fabfca5e7dcb339097f998d6ef11c53dd80a3f99ed5cecc005e93d0ff6d4bda9 9.9s
=> => resolve docker.io/library/node:16.4.2-alpine3.14@sha256:fabfca5e7dcb339097f998d6ef11c53dd80a3f99ed5cecc005e93d0ff6d4bda9 0.0s
=> => sha256:fabfca5e7dcb339097f998d6ef11c53dd80a3f99ed5cecc005e93d0ff6d4bda9 1.00kB / 1.00kB 0.0s
=> => sha256:75dec02064547a8ec570f2953e8d68a1674ad3f37730160f1570cce077be9ed0 1.16kB / 1.16kB 0.0s
=> => sha256:40cb916373b08a087466d2e72402d0b3a4587fd3e9135169498cf0db4ff42a88 6.53kB / 6.53kB 0.0s
=> => sha256:5843afab387455b37944e709ee8c78d7520df80f8d01cf7f861aae63beeddb6b 2.81MB / 2.81MB 0.8s
=> => sha256:c118dce16b0057d713fc98e31606a84e4348fa2c967eaf1bb5fd21ba42825956 35.55MB / 35.55MB 7.1s
=> => sha256:aef8e8137ac43c8199343c96874993063af6584260f22b15e99f735cce5de653 2.35MB / 2.35MB 2.6s
=> => extracting sha256:5843afab387455b37944e709ee8c78d7520df80f8d01cf7f861aae63beeddb6b 0.2s
=> => sha256:ad336e0e52b8dfc38c23599663deb060b1ac169d548dec8072ead94712f708be 281B / 281B 2.0s
=> => extracting sha256:c118dce16b0057d713fc98e31606a84e4348fa2c967eaf1bb5fd21ba42825956 2.0s
=> => extracting sha256:aef8e8137ac43c8199343c96874993063af6584260f22b15e99f735cce5de653 0.2s
=> => extracting sha256:ad336e0e52b8dfc38c23599663deb060b1ac169d548dec8072ead94712f708be 0.0s
=> [2/4] WORKDIR /app 0.6s
=> [3/4] COPY package.json . 0.1s
=> [4/4] COPY . /app 0.1s
=> exporting to image 0.2s
=> => exporting layers 0.2s
=> => writing image sha256:91d93eddff55cba6bd8b72144b7320e025de93e9865177ff584c75b94d1bafc1 0.0s
=> => naming to docker.io/library/app:0.1
When I run the same build command again, it is taking 14.6 seconds.
However if I pull the node:16.4.2-alpine3.14 using,
docker pull node:16.4.2-alpine3.14
and then run the build command, then build takes only 0.3 seconds
I think that when we build an image, dependencies also get downloaded, and that is why the time taken drops from 28.4 to 14.6 seconds. But why even 14.6 seconds? It should be as low as 0.3 seconds.
Why is this so? What am I missing?
The following could be the reasons the build still takes this long:
The Docker client sends the entire build context to the Docker daemon. The build context is the entire directory containing the Dockerfile, and some files or folders, e.g. node_modules, can take up a lot of space. A remedy is to add files that are not required (e.g. .git, node_modules, log files) to a .dockerignore file so Docker ignores them; see the example .dockerignore after this list.
DNS resolution. Check how long it takes to resolve the docker registry using the dig command.
Enabling BuildKit for your build will help improve the build time. Set the DOCKER_BUILDKIT=1 environment variable when invoking the docker build command, for example:
DOCKER_BUILDKIT=1 docker build .
Please look at this link for more information.
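As an illustration of the first point, a minimal .dockerignore for a Node.js project might look like this (the entries are examples; adjust them to your project):
# keep the build context sent to the daemon small
node_modules
.git
*.log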