I built the Docker image locally on my Mac, and when I try to run the container I get the error below. I tried both of the options below.
# Build OPA Service directory, load policies and data, install and run OPA daemon
FROM alpine:latest
RUN apk --no-cache add curl
ADD $PWD/data /data
VOLUME /data
RUN curl -L -o opa https://openpolicyagent.org/downloads/v0.46.1/opa_darwin_amd64
RUN chmod 755 ./opa
EXPOSE 8181
CMD ./opa run -s ./data --skip-version-check
Docker Build Command
docker build -t opaservice .
Docker Run command I'm executing
docker run opaservice
Error Message logged
./opa: line 0: syntax error: unterminated quoted string
I tried changing from CMD to ENTRYPOINT as below, but no luck:
ENTRYPOINT ["./opa" "run" "-s" "./data" "--skip-version-check"]
I tried to recreate this, and it worked for me.
root@swarm01:/myworkspace/docker# cat Dockerfile
FROM alpine:latest
RUN apk --no-cache add curl
ADD $PWD/data /data
VOLUME /data
RUN curl -L -o opa https://openpolicyagent.org/downloads/v0.46.1/opa_darwin_amd64
RUN chmod 755 ./opa
EXPOSE 8181
#CMD ./opa run -s ./data --skip-version-check
ENTRYPOINT ["./opa" "run" "-s" "./data" "--skip-version-check"]
I created an empty data directory.
root@swarm01:/myworkspace/docker# mkdir data
root@swarm01:/myworkspace/docker# docker build -t test .
Sending build context to Docker daemon 2.56kB
Step 1/8 : FROM alpine:latest
---> bfe296a52501
Step 2/8 : RUN apk --no-cache add curl
---> Using cache
---> b535a243661d
Step 3/8 : ADD $PWD/data /data
---> 4be29d67af7c
Step 4/8 : VOLUME /data
---> Running in 508d5eb2331d
Removing intermediate container 508d5eb2331d
---> 3e4399d19162
Step 5/8 : RUN curl -L -o opa https://openpolicyagent.org/downloads/v0.46.1/opa_darwin_amd64
---> Running in bb11fb2a5316
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 81 100 81 0 0 414 0 --:--:-- --:--:-- --:--:-- 415
0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0
100 57.8M 100 57.8M 0 0 6976k 0 0:00:08 0:00:08 --:--:-- 9.9M
Removing intermediate container bb11fb2a5316
---> b87791a27ae9
Step 6/8 : RUN chmod 755 ./opa
---> Running in f69cfcc4d253
Removing intermediate container f69cfcc4d253
---> a6e4cd0e6e34
Step 7/8 : EXPOSE 8181
---> Running in e48a8bf4aced
Removing intermediate container e48a8bf4aced
---> db4b8ba5fb95
Step 8/8 : ENTRYPOINT ["./opa" "run" "-s" "./data" "--skip-version-check"]
---> Running in e6cf0d68f2c8
Removing intermediate container e6cf0d68f2c8
---> b0f0ba930f81
Successfully built b0f0ba930f81
Successfully tagged test:latest
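A small note on running the result: the Dockerfile only EXPOSEs 8181, so to reach the OPA server from the host the port still has to be published. A minimal sketch, using the test tag from the build above:
docker run -p 8181:8181 test
curl http://localhost:8181/health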
I have a working build that I am trying to dockerise. This is my Dockerfile; I added the "pwd" and "ls -l" steps to check whether the build output is copied correctly, and it is. However, when I run "docker run" I get the error "No such file or directory". Please let me know what I might be doing wrong. I appreciate your help.
Dockerfile
FROM <base image>
WORKDIR /app
RUN echo 'List all files'
RUN pwd
RUN ls -l
COPY src/mysolution-linux-amd64 /app/
RUN ls -l
ENTRYPOINT ["/app/mysolution-linux-amd64"]
I have tried ENTRYPOINT with both "./mysolution-linux-amd64" and "/app/mysolution-linux-amd64", but both fail when I run the container.
Output during Docker build
Sending build context to Docker daemon 1.014GB
Step 1/8 : FROM <base image>
---> 3ed27f7c19ce
Step 2/8 : WORKDIR /app
---> Running in 1b273ccccd22
Removing intermediate container 1b273ccccd22
---> 92560bbb67eb
Step 3/8 : RUN echo 'List all files'
---> Running in faddc1b6adfd
List all files
Removing intermediate container faddc1b6adfd
---> b7b2f657012e
Step 4/8 : RUN pwd
---> Running in 8354a5a476ac
/app
Removing intermediate container 8354a5a476ac
---> 204a625b730b
Step 5/8 : RUN ls -l
---> Running in 0d45cf1339d9
total 0
Removing intermediate container 0d45cf1339d9
---> 6df6451aef44
Step 6/8 : COPY src/mysolution-linux-amd64 /app/
---> 44ac2f066340
Step 7/8 : RUN ls -l
---> Running in d17ec6b0c7af
total 11460
-rwxrwxr-x 1 root root 11734780 Nov 26 04:25 mysolution-linux-amd64
Removing intermediate container d17ec6b0c7af
---> 56a879ef9440
Step 8/8 : ENTRYPOINT ["/app/mysolution-linux-amd64"]
---> Running in 33bea73f14dc
Removing intermediate container 33bea73f14dc
---> ef794fe310bc
Successfully built ef794fe310bc
Successfully tagged newtech/mysolution:latest
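A common cause of "No such file or directory" for a binary that is clearly present in the image is a missing dynamic loader or shared library inside the container filesystem (for example, a glibc-linked binary on a musl-based image). One way to check, assuming ldd happens to be available in the base image (which the question does not specify), is to inspect the binary's dynamic dependencies inside the image built above:
docker run --rm --entrypoint ldd newtech/mysolution:latest /app/mysolution-linux-amd64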
I'm building a Docker image with node:10.20.1-buster-slim as the base image, and the Makefile runs fine: it creates the image and runs it.
However, this Docker build is also used in CI pipelines on Jenkins, which is installed on a single machine. Sometimes the same image (based on node:10.20.1-buster-slim) is built by two processes simultaneously (e.g. two Jenkins users run two acceptance tests at the same time), so I sometimes get this kind of error when trying to run the container after building the image:
failed to get digest sha256:3d3f18764dcb8026243f228a3eace39dabf06677e4468edaa47aa6e888122fd7: open /var/lib/docker/image/overlay2/imagedb/content/sha256/3d3f18764dcb8026243f228a3eace39dabf06677e4468edaa47aa6e888122fd7: no such file or directory
Makefile targets:
define down
docker-compose down
endef
define prune
rm -rf build/dist/*.js.map build/dist/*.js build/dist/*.css
docker container prune -f
docker volume prune -f
docker image prune -f
endef
build/dist/main.js: $(SRC)
$(call down)
$(call prune)
chmod -R a+rw build
docker build . --target=prod -t web_app_release
docker run --mount type=bind,source="$(PWD)"/web_app/build,target=/opt/web_app/build --rm web_app_release npm run build-prod
docker run --mount type=bind,source="$(PWD)"/web_app/build,target=/opt/web_app/build --rm web_app_release npm run build-css
Error (reproducible sometimes):
Compiling web_app ...
docker-compose down
Removing network webapp_default
Network webapp_default not found.
rm -rf build/dist/*.js.map build/dist/*.js build/dist/*.css
docker container prune -f
Total reclaimed space: 0B
docker volume prune -f
Total reclaimed space: 0B
docker image prune -f
Total reclaimed space: 0B
chmod -R a+rw build
docker build . --target=prod -t web_app_release
Sending build context to Docker daemon 122.5MB
Step 1/10 : FROM node:10.20.1-buster-slim AS base
---> cfd03db45bdc
Step 2/10 : ENV WORKPATH /opt/web_app/
---> Using cache
---> e1e61766fc45
Step 3/10 : WORKDIR $WORKPATH
---> Using cache
---> e776ef1390bd
Step 4/10 : RUN chown -R node $WORKPATH
---> Using cache
---> 1a5bb0d392a3
Step 5/10 : USER node
---> Using cache
---> bc682380a352
Step 6/10 : COPY --chown=node:node package* $WORKPATH
---> Using cache
---> 71b1848cad1e
Step 7/10 : RUN npm install --only=prod --no-progress
---> Using cache
---> de4ef62ab81e
Step 8/10 : CMD ["npm", "start"]
---> Using cache
---> 319f69778fb6
Step 9/10 : FROM base AS prod
---> 319f69778fb6
Step 10/10 : COPY --chown=node:node . $WORKPATH
---> 3d3f18764dcb
Successfully built 3d3f18764dcb
failed to get digest sha256:3d3f18764dcb8026243f228a3eace39dabf06677e4468edaa47aa6e888122fd7: open /var/lib/docker/image/overlay2/imagedb/content/sha256/3d3f18764dcb8026243f228a3eace39dabf06677e4468edaa47aa6e888122fd7: no such file or directory
Makefile:91: recipe for target 'build/dist/main.js' failed
make[1]: *** [build/dist/main.js] Error 1
Makefile:94: recipe for target 'web_app' failed
make: *** [web_app] Error 2
[Pipeline] error
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: Build failed
Finished: FAILURE
How could I make it so that users can run tests concurrently without builds of the same image overlapping each other?
Could I use some tool included with Docker, or check this manually with bash scripts or in the Makefile?
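One approach that would fit the existing Makefile is to serialize the image build with flock(1), so two Jenkins runs cannot build and prune the same tag at the same time; a sketch, under the assumption that a shared lock file such as /tmp/web_app_release.lock is acceptable on the build machine:
flock /tmp/web_app_release.lock docker build . --target=prod -t web_app_release
An alternative would be to tag each build uniquely (for example with the Jenkins BUILD_NUMBER) so concurrent builds never overwrite the same tag; both are sketches under those assumptions, not a confirmed fix.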
I have what I believe is a pretty simple setup.
I build a binary outside of Docker and then try to add it to an image using this Dockerfile:
FROM alpine
COPY apps/dist/apps /bin/
RUN chmod +x /bin/apps
RUN ls -al /bin | grep apps
CMD /bin/apps
And I think this should work.
The binary seems to work on its own on my host machine, and I don't understand why it wouldn't in the Docker image.
Anyway, the output I get is this:
docker build -t apps -f app.Dockerfile . && docker run apps
Sending build context to Docker daemon 287.5MB
Step 1/5 : FROM alpine
---> d05cf6536f67
Step 2/5 : COPY apps/dist/apps /bin/
---> Using cache
---> c54d6d57154e
Step 3/5 : RUN chmod +x /bin/apps
---> Using cache
---> aa7e6adb0981
Step 4/5 : RUN ls -al /bin | grep apps
---> Running in 868c5e235d68
-rwxr-xr-x 1 root root 68395166 Dec 20 13:35 apps
Removing intermediate container 868c5e235d68
---> f052c06269b0
Step 5/5 : CMD /bin/apps
---> Running in 056fd02733e1
Removing intermediate container 056fd02733e1
---> 331600154cbe
Successfully built 331600154cbe
Successfully tagged apps:latest
/bin/sh: /bin/apps: not found
Does this make sense, and am I just missing something obvious?
Your binary likely has dynamic links to libraries that don't exist inside the image filesystem. You can check those dynamic links with the ldd apps/dist/apps command.
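As a concrete check, running ldd on the host shows whether the binary expects glibc (Alpine ships musl, so a glibc-linked binary reports "not found" even though the file exists). If the binary happens to be a Go program (the question does not say), rebuilding it with cgo disabled usually produces a static binary that runs on Alpine; a hedged sketch, where the package path is hypothetical:
ldd apps/dist/apps
# if it lists libc.so.6 / ld-linux dependencies and the source is Go:
CGO_ENABLED=0 go build -o apps/dist/apps ./cmd/apps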
I am trying to rebuild bogem/ftp so that the container runs as a non-root user.
I created my own repo where you can find all the files.
I built it locally:
docker build -t bram_ftp:v0.4 .
Sending build context to Docker daemon 8.704kB
Step 1/17 : FROM ubuntu:latest
---> f643c72bc252
Step 2/17 : RUN apt-get update && apt-get install -y --no-install-recommends vsftpd db-util sudo && apt-get clean
---> Using cache
---> 8ab5e8a0d3d7
Step 3/17 : RUN useradd -m ftpuser
---> Using cache
---> 179c738d8a8b
Step 4/17 : ENV FTP_USER admin
---> Using cache
---> 3f55c42bccda
Step 5/17 : ENV FTP_PASS admin
---> Using cache
---> a44874a4d54e
Step 6/17 : ENV PASV_ADDRESS=127.0.0.1
---> Using cache
---> 824c15835a7f
Step 7/17 : COPY vsftpd_virtual /etc/pam.d/
---> Using cache
---> 5045135bb1ca
Step 8/17 : COPY run-vsftpd.sh /usr/sbin/
---> Using cache
---> 30bd2be7d610
Step 9/17 : COPY config-vsftpd.sh /usr/sbin/
---> Using cache
---> 8347833c2f63
Step 10/17 : RUN /usr/sbin/config-vsftpd.sh
---> Using cache
---> 58237fe9a8be
Step 11/17 : COPY vsftpd.conf /etc/vsftpd/
---> Using cache
---> 92c9cbc75356
Step 12/17 : RUN chown -R ftpuser:ftpuser /etc/vsftpd/ && chown ftpuser:ftpuser /usr/sbin/*-vsftpd.sh && chmod +x /usr/sbin/*-vsftpd.sh && mkdir -p /var/run/vsftpd/empty
---> Running in 91f03e3198df
Removing intermediate container 91f03e3198df
---> 94cfaf7209a9
Step 13/17 : VOLUME /home/ftpuser/vsftpd
---> Running in cfdf44372c17
Removing intermediate container cfdf44372c17
---> 5d7416bd2844
Step 14/17 : VOLUME /var/log/vsftpd
---> Running in c2b5121adb49
Removing intermediate container c2b5121adb49
---> 620cc085a235
Step 15/17 : EXPOSE 20 21
---> Running in f12d22af36cc
Removing intermediate container f12d22af36cc
---> 1dd7698c18b3
Step 16/17 : USER ftpuser
---> Running in d7a2cdcc3aa1
Removing intermediate container d7a2cdcc3aa1
---> 3a88a4a89ac8
Step 17/17 : CMD ["/usr/sbin/run-vsftpd.sh"]
---> Running in 86f5dec18f71
Removing intermediate container 86f5dec18f71
---> 50fdae730864
Successfully built 50fdae730864
Successfully tagged bram_ftp:v0.4
When I run it locally as described in the README, the container just keeps restarting and I do not see any logs or errors.
When I run the container interactively (with -it instead of -d, i.e. not as a daemon), I get this error:
docker run -it -v /tmp/vsftpd:/home/ftpuser/vsftpd \
-p 20:20 -p 21:21 -p 47400-47470:47400-47470 \
-e FTP_USER=admin \
-e FTP_PASS=admin \
-e PASV_ADDRESS=127.0.0.1 \
--name ftp \
--restart=always \
bram_ftp:v0.4
500 OOPS: config file not owned by correct user, or not a file
But when I check which user the container runs as and what the permissions on vsftpd.conf are, everything seems fine:
docker run bram_ftp:v0.4 id
uid=1000(ftpuser) gid=1000(ftpuser) groups=1000(ftpuser)
docker run bram_ftp:v0.4 ls -la /etc/vsftpd
total 28
drwxr-xr-x 1 ftpuser ftpuser 4096 Dec 31 13:12 .
drwxr-xr-x 1 root root 4096 Dec 31 14:28 ..
-rw-r--r-- 1 ftpuser ftpuser 12288 Dec 31 13:12 virtual_users.db
-rw-r--r-- 1 ftpuser ftpuser 12 Dec 31 13:12 virtual_users.txt
-rw-r--r-- 1 ftpuser ftpuser 1734 Dec 31 13:09 vsftpd.conf
When I run the container like below, I can get into it without issues:
docker run -it bram_ftp:v0.4 bash
ftpuser@5358b2368c55:/$
I then start vsftpd manually:
docker run -it bram_ftp:v0.4 bash
ftpuser@5358b2368c55:/$ vsftpd /etc/vsftpd/vsftpd.conf
If I then check what processes are running in the container I see this:
docker exec 5358b2368c55 ps -ef
UID PID PPID C STIME TTY TIME CMD
ftpuser 1 0 0 14:31 pts/0 00:00:00 bash
ftpuser 10 1 0 14:32 pts/0 00:00:00 vsftpd /etc/vsftpd/vsftpd.conf
ftpuser 11 0 0 14:33 ? 00:00:00 ps -ef
I don't have any experience with vsftpd, so I have no clue what I am doing wrong here. I hope someone can help me out.
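For what it's worth, even when a --restart=always container loops with no visible output, its exit status and any captured output can usually still be inspected with standard Docker commands; a small sketch using the container name ftp from the run command above:
docker ps -a --filter name=ftp
docker logs ftp
docker inspect --format '{{.State.ExitCode}} {{.State.Error}}' ftp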
I am new to Docker and am trying to create a Flink Docker image. The image appears to have been created successfully, but I have a doubt: I cannot see the Flink binaries under /opt, the path mentioned in the Dockerfile.
How do I know whether my Flink Docker image was created successfully?
In the console screenshot, a few of the commands are highlighted in red; does that mean something is wrong?
Status showing the Flink Docker image was created:
Please help me. Thank you.
Full Log:
sudo /home/develk/cntx_eng/build.sh \
> --job-artifacts /home/develk/cntx_eng/FlinkContextEnginePoc-0.0.1-SNAPSHOT.jar \
> --from-archive /home/develk/cntx_eng/flink-1.4.0-bin-hadoop24-scala_2.11.tgz \
> --image-name contxeng-flink-poc:1.4.0
--job-artifacts
/home/develk/cntx_eng/FlinkContextEnginePoc-0.0.1-SNAPSHOT.jar
--from-archive
JOB_ARTIFACTS_PATH : /home/develk/cntx_eng/FlinkContextEnginePoc-0.0.1-SNAPSHOT.jar
FROM_ARCHIVE : /home/develk/cntx_eng/flink-1.4.0-bin-hadoop24-scala_2.11.tgz
HADOOP_VERSION :
FLINK_VERSION :
-------------------------Arg Values---------------------------
FLINK_DIST : _TMP_/flink.tgz
JOB_ARTIFACTS_TARGET : _TMP_/artifacts
SHADED_HADOOP :
IMAGE_NAME : contxeng-flink-poc:1.4.0
--------------------------------------------------------------
Sending build context to Docker daemon 606.6MB
Step 1/23 : FROM openjdk:8-jre-alpine
---> f7a292bbb70c
Step 2/23 : RUN apk add --no-cache bash snappy libc6-compat
---> Using cache
---> 9e84497f3616
Step 3/23 : ENV FLINK_INSTALL_PATH=/opt
---> Using cache
---> 87bc358ccf00
Step 4/23 : ENV FLINK_HOME $FLINK_INSTALL_PATH/flink
---> Using cache
---> 712ba8d54555
Step 5/23 : ENV FLINK_LIB_DIR $FLINK_HOME/lib
---> Using cache
---> 80e7b085252e
Step 6/23 : ENV FLINK_PLUGINS_DIR $FLINK_HOME/plugins
---> Using cache
---> 7d39101e47d3
Step 7/23 : ENV FLINK_OPT_DIR $FLINK_HOME/opt
---> Using cache
---> 9bff7fc7145d
Step 8/23 : ENV FLINK_JOB_ARTIFACTS_DIR $FLINK_INSTALL_PATH/artifacts
---> Using cache
---> b0c01f3aab84
Step 9/23 : ENV FLINK_USR_LIB_DIR $FLINK_HOME/usrlib
---> Using cache
---> f4236bc26cab
Step 10/23 : ENV PATH $PATH:$FLINK_HOME/bin
---> Using cache
---> 2cb7cd442b6f
Step 11/23 : ARG flink_dist=NOT_SET
---> Using cache
---> 1a6fc691baa2
Step 12/23 : ARG job_artifacts=NOT_SET
---> Using cache
---> e11400e03120
Step 13/23 : ARG python_version=NOT_SET
---> Using cache
---> 313089fd991e
Step 14/23 : ARG hadoop_jar=NOT_SET*
---> Using cache
---> ccbef4dfa806
Step 15/23 : RUN if [ "$python_version" = "2" ]; then apk add --no-cache python; elif [ "$python_version" = "3" ]; then apk add --no-cache python3 && ln -s /usr/bin/python3 /usr/bin/python; fi
---> Using cache
---> 7e6dca36dad4
Step 16/23 : ADD $flink_dist $hadoop_jar $FLINK_INSTALL_PATH/
---> 5afb7a8e5414
Step 17/23 : ADD $job_artifacts/* $FLINK_JOB_ARTIFACTS_DIR/
---> c2789d3d80b3
Step 18/23 : RUN set -x && ln -s $FLINK_INSTALL_PATH/flink-[0-9]* $FLINK_HOME && ln -s $FLINK_JOB_ARTIFACTS_DIR $FLINK_USR_LIB_DIR && if [ -n "$python_version" ]; then ln -s $FLINK_OPT_DIR/flink-python*.jar $FLINK_LIB_DIR; fi && if [ -f ${FLINK_INSTALL_PATH}/flink-shaded-hadoop* ]; then ln -s ${FLINK_INSTALL_PATH}/flink-shaded-hadoop* $FLINK_LIB_DIR; fi && addgroup -S flink && adduser -D -S -H -G flink -h $FLINK_HOME flink && chown -R flink:flink ${FLINK_INSTALL_PATH}/flink-* && chown -R flink:flink ${FLINK_JOB_ARTIFACTS_DIR}/ && chown -h flink:flink $FLINK_HOME
---> Running in c4cb70216f08
+ ln -s /opt/flink-1.4.0 /opt/flink-1.4.0-bin-hadoop24-scala_2.11.tgz /opt/flink
+ ln -s /opt/artifacts /opt/flink/usrlib
+ '[' -n ]
+ '[' -f /opt/flink-shaded-hadoop-2-uber-2.4.1-8.0.jar ]
+ ln -s /opt/flink-shaded-hadoop-2-uber-2.4.1-8.0.jar /opt/flink/lib
+ addgroup -S flink
+ adduser -D -S -H -G flink -h /opt/flink flink
+ chown -R flink:flink /opt/flink-1.4.0 /opt/flink-1.4.0-bin-hadoop24-scala_2.11.tgz /opt/flink-shaded-hadoop-2-uber-2.4.1-8.0.jar
+ chown -R flink:flink /opt/artifacts/
+ chown -h flink:flink /opt/flink
Removing intermediate container c4cb70216f08
---> 459b1156294b
Step 19/23 : COPY docker-entrypoint.sh /
---> d4ae4be34415
Step 20/23 : USER flink
---> Running in 95a2c9234cd5
Removing intermediate container 95a2c9234cd5
---> ebdc913c7dd9
Step 21/23 : EXPOSE 8081 6123
---> Running in f6fad553a1d7
Removing intermediate container f6fad553a1d7
---> 51e6c57d2bde
Step 22/23 : ENTRYPOINT ["/docker-entrypoint.sh"]
---> Running in 09e7c0759fb6
Removing intermediate container 09e7c0759fb6
---> 99cdeb095b8f
Step 23/23 : CMD ["--help"]
---> Running in f985bd546dcf
Removing intermediate container f985bd546dcf
---> 039086df61e6
Successfully built 039086df61e6
Successfully tagged contxeng-flink-poc:1.4.0
Seems like everything went well.
To see which containers are running:
docker ps
contxeng-flink-poc:1.4.0 should be in there.
If you actually want to interact with Flink, you should publish some ports.
The Flink dashboard for example runs on port 8081 of your container.
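For example, a minimal sketch that publishes the ports EXPOSEd in the Dockerfile above so the dashboard is reachable from the host (the host-side port numbers are just an assumption):
docker run -d -p 8081:8081 -p 6123:6123 contxeng-flink-poc:1.4.0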
You can also get an interactive bash shell into your container by running the following command (if your container has bash installed), using the container name or ID shown by docker ps:
docker exec -it <container-name-or-id> bash
That's where you will find your Flink binaries.
If the container is not running, check if it was built:
docker images
If that's the case, run it:
docker run -d contxeng-flink-poc:1.4.0