I'm getting the same error message as this SO post. However, after trying all the suggestions on that post, I'm still unable to resolve my issue, which is described below.
I'm using Kaniko to build and push images for later use. I've verified that the image-pushing portion of the job works by building a test Dockerfile with --context pointing at a publicly accessible git repo.
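For reference, that working test used container args roughly like the following (the git repo URL here is just a placeholder, not the actual repo):

args: ["--dockerfile=Dockerfile",
       "--context=git://github.com/<user>/<repo>.git",
       "--destination=jethrocao/containerization-test:v0"]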
So now I'm trying to build images by using a mounted hostPath directory /root/app/containerization-engine/docker-service as the build context, which you can see indeed exists, along with its Dockerfile, in the following shell output:
[root@ip-172-31-60-18 kaniko-jobs]# ll -d /root/app/containerization-engine/docker-service/
drwxr-xr-x. 8 root root 4096 May 24 17:52 /root/app/containerization-engine/docker-service/
[root@ip-172-31-60-18 kaniko-jobs]# ll -F /root/app/containerization-engine/docker-service/
total 52
drwxr-xr-x. 6 root root 104 May 9 01:50 app/
-rw-r--r--. 1 root root 20376 May 25 12:02 batch_metrics.py
-rw-r--r--. 1 root root 7647 May 25 12:02 batch_predict.py
-rw-r--r--. 1 root root 14 May 25 12:02 dev_requirements.txt
-rw-r--r--. 1 root root 432 May 25 12:02 Dockerfile
-rw-r--r--. 1 root root 136 May 25 12:02 gunicorn_config.py
drwxr-xr-x. 2 root root 19 May 9 01:50 hooks/
drwxr-xr-x. 2 root root 37 May 9 01:50 jenkins/
-rw-r--r--. 1 root root 158 May 25 12:02 manage.py
drwxr-xr-x. 2 root root 37 May 9 01:50 models/
-rw-r--r--. 1 root root 0 May 25 12:02 README.md
-rw-r--r--. 1 root root 247 May 25 12:02 requirements.txt
drwxr-xr-x. 2 root root 94 May 9 01:50 utils/
-rw-r--r--. 1 root root 195 May 25 12:02 wsgi.py
The job manifest containerization-test.yaml that I'm running with kubectl apply -f is defined below:
apiVersion: batch/v1
kind: Job
metadata:
  name: kaniko-containerization-test
spec:
  template:
    spec:
      containers:
      - name: kaniko
        image: gcr.io/kaniko-project/executor:latest
        args: ["--dockerfile=Dockerfile",
               "--context=dir:///docker-service",
               "--destination=jethrocao/containerization-test:v0",
               "--verbosity=trace"]
        volumeMounts:
        - name: docker-service-build-context
          mountPath: "/docker-service"
        volumeMounts:
        - name: kaniko-secret
          mountPath: "/kaniko/.docker"
          readOnly: true
      restartPolicy: Never
      volumes:
      - name: docker-service-build-context
        hostPath:
          path: "/root/app/containerization-engine/docker-service"
          type: Directory
      - name: kaniko-secret
        secret:
          secretName: regcred-ca5e
          items:
          - key: .dockerconfigjson
            path: config.json
          optional: false
The job is created successfully, but the pods created to run it keep erroring out. Inspecting the log from one of these failed attempts, I see:
[root@ip-172-31-60-18 kaniko-jobs]# kubectl logs kaniko-containerization-test-rp8lh | head
DEBU[0000] Getting source context from dir:///docker-service
DEBU[0000] Build context located at /docker-service
Error: error resolving dockerfile path: please provide a valid path to a Dockerfile within the build context with --dockerfile
Usage:
executor [flags]
executor [command]
Available Commands:
completion Generate the autocompletion script for the specified shell
help Help about any command
To triple-confirm that the hostPath directory and the Dockerfile it contains are both accessible when mounted as a volume into a container, I changed the batch job into a deployment object (running a different image, not Kaniko), applied it, ran kubectl exec -it into the running pod, and inspected the mounted path /docker-service, which exists along with the full contents of the directory. I then wrote to the Dockerfile within, just to test write access, and it worked as expected; the written change persisted outside the container on the cluster's node too.
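A minimal sketch of that test deployment (the busybox image and the names are illustrative; the volume section is identical to the job's):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mount-check
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mount-check
  template:
    metadata:
      labels:
        app: mount-check
    spec:
      containers:
      - name: inspect
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - name: docker-service-build-context
          mountPath: "/docker-service"
      volumes:
      - name: docker-service-build-context
        hostPath:
          path: "/root/app/containerization-engine/docker-service"
          type: Directory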
I'm really at a loss as to what could be the problem. Any ideas?
I am trying to run a single-node Elasticsearch instance on an HPC cluster. To do this, I am converting the Elasticsearch Docker container into a Singularity container. When I launch the container I get the following error:
$ singularity exec --overlay overlay.img elastic.sif /usr/share/elasticsearch/bin/elasticsearch
Could not create auto-configuration directory
Exception in thread "main" java.lang.RuntimeException: starting java failed with [1]
output:
[0.000s][error][logging] Error opening log file 'logs/gc.log': Permission denied
[0.000s][error][logging] Initialization of output 'file=logs/gc.log' using options 'filecount=32,filesize=64m' failed.
error:
Invalid -Xlog option '-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m', see error log for details.
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
at org.elasticsearch.server.cli.JvmOption.flagsFinal(JvmOption.java:113)
at org.elasticsearch.server.cli.JvmOption.findFinalOptions(JvmOption.java:80)
at org.elasticsearch.server.cli.MachineDependentHeap.determineHeapSettings(MachineDependentHeap.java:59)
at org.elasticsearch.server.cli.JvmOptionsParser.jvmOptions(JvmOptionsParser.java:132)
at org.elasticsearch.server.cli.JvmOptionsParser.determineJvmOptions(JvmOptionsParser.java:90)
at org.elasticsearch.server.cli.ServerProcess.createProcess(ServerProcess.java:211)
at org.elasticsearch.server.cli.ServerProcess.start(ServerProcess.java:106)
at org.elasticsearch.server.cli.ServerProcess.start(ServerProcess.java:89)
at org.elasticsearch.server.cli.ServerCli.startServer(ServerCli.java:213)
at org.elasticsearch.server.cli.ServerCli.execute(ServerCli.java:90)
at org.elasticsearch.common.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:54)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:85)
at org.elasticsearch.cli.Command.main(Command.java:50)
at org.elasticsearch.launcher.CliToolLauncher.main(CliToolLauncher.java:64)
If I understand correctly, Elasticsearch is trying to create a log file in /var/log/elasticsearch but does not have the correct permissions. So I created the following recipe to create the folders and set the permissions so that any process can write into the log directory. My recipe is the following:
Bootstrap: docker
From: elasticsearch:8.3.1

%files
    elasticsearch.yml /usr/share/elasticsearch/config/

%post
    mkdir -p /var/log/elasticsearch
    chown -R elasticsearch:elasticsearch /var/log/elasticsearch
    chmod -R 777 /var/log/elasticsearch

    mkdir -p /var/data/elasticsearch
    chown -R elasticsearch:elasticsearch /var/data/elasticsearch
    chmod -R 777 /var/data/elasticsearch
The elasticsearch.yml file has the following content:
cluster.name: "docker-cluster"
network.host: 0.0.0.0
discovery.type: single-node
ingest.geoip.downloader.enabled: false
After building this recipe the directory /var/log/elasticsearch seems to get created correctly:
$ singularity exec elastic.sif ls -alh /var/log/
total 569K
drwxr-xr-x 4 root root 162 Jul 8 14:43 .
drwxr-xr-x 12 root root 172 Jul 8 14:43 ..
-rw-r--r-- 1 root root 7.7K Jun 29 17:29 alternatives.log
drwxr-xr-x 2 root root 69 Jun 29 17:29 apt
-rw-r--r-- 1 root root 58K May 31 11:43 bootstrap.log
-rw-rw---- 1 root utmp 0 May 31 11:43 btmp
-rw-r--r-- 1 root root 187K Jun 29 17:30 dpkg.log
drwxrwxrwx 2 elasticsearch elasticsearch 3 Jul 8 14:43 elasticsearch
-rw-r--r-- 1 root root 32K Jun 29 17:30 faillog
-rw-rw-r-- 1 root utmp 286K Jun 29 17:30 lastlog
-rw-rw-r-- 1 root utmp 0 May 31 11:43 wtmp
But when I launch the container I get the permission denied error listed above.
What is missing here? What permissions is Elasticsearch expecting?
The following workaround seems to be working for me now:
When launching the Singularity container, the Elasticsearch process is executed inside the container with the same UID as my own (i.e. the user launching the container with singularity exec). The Elasticsearch container is configured to run Elasticsearch as a separate elasticsearch user that exists inside the container. The issue is that Singularity (unlike Docker) runs every process inside the container with my own UID rather than the elasticsearch UID, resulting in the error above.
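This is easy to check: the UID inside the container matches the UID of the invoking user (the output here is illustrative):

$ id -u
1000
$ singularity exec elastic.sif id -u
1000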
To work around this, I created a base Ubuntu Singularity image and then installed Elasticsearch into the container following these installation instructions (https://www.elastic.co/guide/en/elasticsearch/reference/current/targz.html). Because the installation was performed with my system user and UID, the entire Elasticsearch installation belongs to my system user rather than a separate elasticsearch user. Then I can launch the Elasticsearch service inside the container.
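A rough sketch of the same idea, using a bind mount of a host-side tar.gz installation into a plain Ubuntu image instead of baking the files in (the version, paths, and Ubuntu tag are illustrative, not a definitive recipe):

$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.3.1-linux-x86_64.tar.gz
$ tar -xzf elasticsearch-8.3.1-linux-x86_64.tar.gz
$ singularity build ubuntu.sif docker://ubuntu:22.04
$ singularity exec --bind ./elasticsearch-8.3.1:/opt/elasticsearch ubuntu.sif /opt/elasticsearch/bin/elasticsearch

Because the unpacked directory is owned by my own user, the process (which runs with my UID) can create its logs and data there.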
I am trying to deploy Jenkins to IBM Cloud Kubernetes Service using a persistent volume. The Jenkins container gets stuck at Beginning extraction from war file.
I have tried without persistence, and it deploys as expected.
persistence:
  storageClass: ibmc-file-bronze
serviceAccount:
  create: false
  name: jenkins
  annotations: {}
controller:
  customInitContainers:
    - name: "volume-mount-permission"
      image: "busybox"
      command: ["/bin/sh"]
      args:
        - -c
        - >-
          chgrp -R 1000 /var/jenkins_home &&
          chown -R 0 /var/jenkins_home &&
          chmod -R g+rwx /var/jenkins_home
      volumeMounts:
        - name: "jenkins-home"
          mountPath: "/var/jenkins_home"
      securityContext:
        runAsUser: 0
  serviceType: NodePort
This is my values.yaml file. I configured a custom init container for folder permissions; without it, the init container fails with a permission issue. With the volume-mount-permission init container, all other containers terminate successfully.
The permissions of the jenkins_home folder are shown below.
jenkins#jenkins-pv-0:/$ ls -al /var/jenkins_home/
total 44
drwxrwxr-x 6 nobody jenkins 4096 Nov 26 15:02 .
drwxr-xr-x 1 root root 4096 Nov 26 15:01 ..
drwxr-xr-x 3 jenkins jenkins 4096 Nov 26 14:50 .cache
drwxrwsrwx 2 root jenkins 4096 Nov 26 14:50 casc_configs
-rw-r--r-- 1 jenkins jenkins 3106 Nov 26 15:01 copy_reference_file.log
-rw-r--r-- 1 jenkins jenkins 8 Nov 26 14:50 jenkins.install.InstallUtil.lastExecVersion
-rw-r--r-- 1 jenkins jenkins 8 Nov 26 14:50 jenkins.install.UpgradeWizard.state
drwxr-xr-x 2 jenkins jenkins 16384 Nov 26 14:51 plugins
-rw-r--r-- 1 jenkins jenkins 78 Nov 26 14:50 plugins.txt
drwxr-xr-x 6 jenkins jenkins 4096 Nov 26 15:02 war
The logs of the Jenkins container are as below:
2020-11-26T15:01:49.195430822Z Running from: /usr/share/jenkins/jenkins.war
2020-11-26T15:01:49.199519383Z webroot: EnvVars.masterEnvVars.get("JENKINS_HOME")
2020-11-26T15:01:49.404752124Z 2020-11-26 15:01:49.388+0000 [id=1] INFO org.eclipse.jetty.util.log.Log#initialized: Logging initialized #522ms to org.eclipse.jetty.util.log.JavaUtilLog
2020-11-26T15:01:49.585199893Z 2020-11-26 15:01:49.584+0000 [id=1] INFO winstone.Logger#logInternal: Beginning extraction from war file
I followed the official Jenkins Kubernetes installation guide.
The solution was installing the IBM Cloud Block Storage plug-in.
On IBM Cloud Kubernetes Service, I believe, Jenkins cannot be installed on file storage.
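For reference, the plug-in can be installed with Helm roughly as follows (the repo URL and chart name are taken from the IBM Cloud docs and may have changed since), after which a block storage class can be used in values.yaml:

$ helm repo add iks-charts https://icr.io/helm/iks-charts
$ helm repo update
$ helm install block-storage-plugin iks-charts/ibmcloud-block-storage-plugin -n kube-system

persistence:
  storageClass: ibmc-block-bronze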
I'm pretty sure that the last update of Docker for Windows has broken something.
Here is the thing. I have a custom image named toolbox, built from alpine, with a bunch of scripts inside it (bind-mounted from the local folder ./mnt/):
version: '3'

services:
  # ...
  toolbox:
    build:
      context: ./.docker/toolbox
    restart: always
    volumes:
      - ./mnt/etc/periodic/daily:/etc/periodic/daily
Files have the right permissions:
/ # ls -la /etc/periodic/daily/
total 4
drwxrwxrwx 1 root root 4096 Mar 16 17:49 .
drwxr-xr-x 7 root root 4096 Jan 16 22:52 ..
-rwxr-xr-x 1 root root 332 Mar 1 23:57 backup-databases
-rwxr-xr-x 1 root root 61 Mar 1 23:51 cleanup-databases-backups
When I try to execute backup-databases I get the following error:
/ # /etc/periodic/daily/backup-databases
/bin/sh: /etc/periodic/daily/backup-databases: Operation not permitted
The strange thing is, if I create a script (from inside the container) it works:
echo "echo Hello" > /etc/periodic/daily/test
chmod +x /etc/periodic/daily/test
/etc/periodic/daily/test
This question is a minimal failing version of this other one:
How to get contents generated by a docker container on the local fileystem
I have the following files:
./test
-rw-r--r-- 1 miqueladell staff 114 Jan 21 15:24 Dockerfile
-rw-r--r-- 1 miqueladell staff 90 Jan 21 15:23 docker-compose.yml
drwxr-xr-x 3 miqueladell staff 102 Jan 21 15:25 html
./test/html:
-rw-r--r-- 1 miqueladell staff 0 Jan 21 15:22 file_from_local_filesystem
Dockerfile
FROM php:7.0.2-apache
RUN touch /var/www/html/file_generated_inside_the_container
VOLUME /var/www/html/
docker-compose.yml
test:
  image: test
  volumes:
    - ./html:/var/www/html/
After running a container built from the image defined in the Dockerfile, what I want to have is:
./html
-- file_from_local_filesystem
-- file_generated_inside_the_container
Instead of this I get the following:
build the image
$ docker build --no-cache -t test .
Sending build context to Docker daemon 4.096 kB
Step 1 : FROM php:7.0.2-apache
---> 2f16964f48ba
Step 2 : RUN touch /var/www/html/file_generated_inside_the_container
---> Running in b957cc9d7345
---> 5579d3a2d3b2
Removing intermediate container b957cc9d7345
Step 3 : VOLUME /var/www/html/
---> Running in 6722ddba76cc
---> 4408967d2a98
Removing intermediate container 6722ddba76cc
Successfully built 4408967d2a98
run a container with previous image
$ docker-compose up -d
Creating test_test_1
list files on the local machine filesystem
$ ls -al html
total 0
drwxr-xr-x 3 miqueladell staff 102 Jan 21 15:25 .
drwxr-xr-x 5 miqueladell staff 170 Jan 21 14:20 ..
-rw-r--r-- 1 miqueladell staff 0 Jan 21 15:22 file_from_local_filesystem
list files from the container
$ docker exec -i -t test_test_1 ls -alR /var/www/html
/var/www/html:
total 4
drwxr-xr-x 1 1000 staff 102 Jan 21 14:25 .
drwxr-xr-x 4 root root 4096 Jan 7 18:05 ..
-rw-r--r-- 1 1000 staff 0 Jan 21 14:22 file_from_local_filesystem
The volume from the local filesystem gets mounted on the container filesystem, replacing its contents.
This is contrary to what I understood from the section "Permissions and Ownership" of the Understanding volumes guide.
How could I get the desired output?
Thanks
EDIT: As is said in the accepted answer, I did not understand volumes when asking the question. Volumes, as mount points, replace the container's content with the contents of the local filesystem path that is mounted.
The solution I needed was to use ENTRYPOINT to run the necessary commands to initialize the contents of the mounted volume once the container is running.
The code that originated the question can be seen working here:
https://github.com/MiquelAdell/composed_wordpress/tree/1.0.0
This is from the guide you've pointed to:
This won’t happen if you specify a host directory for the volume
Volumes you share from other containers or the host filesystem replace directories from the container.
If you need to add some files to a volume, you should do it after you start the container. You can use an entrypoint, for example, which does the touch and then runs your main process.
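A minimal sketch of that pattern, applied to the image above (the script name is arbitrary):

Dockerfile

FROM php:7.0.2-apache
COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
# apache2-foreground is the default command of the php:apache images
CMD ["apache2-foreground"]

docker-entrypoint.sh

#!/bin/sh
# this runs after the volume is mounted, so the file also appears on the host side
touch /var/www/html/file_generated_inside_the_container
# hand off to the main process
exec "$@"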
Yep, pretty sure it should be the full path:
docker-compose.yml
test:
  image: test
  volumes:
    - ./html:/var/www/html/
./html should be /path/to/html
Edit
Output after changing to the full path and running test.sh:
$ docker exec -ti dockervolumetest_test_1 bash
root@c0bd7a722b63:/var/www/html# ls -la
total 8
drwxr-xr-x 2 1000 adm 4096 Jan 21 15:19 .
drwxr-xr-x 3 root root 4096 Jan 7 18:05 ..
-rw-r--r-- 1 1000 adm 0 Jan 21 15:19 file_from_local_filesystem
Edit 2
Sorry, I misunderstood the entire premise of the question :)
So you're trying to get file_generated_inside_the_container (which is created inside your Docker image only) mounted to some location on your host machine, like a "reverse mount".
This isn't possible with any Docker command, but if all you're after is access to your VOLUME's files on the host, you can find them under Docker's root directory (normally /var/lib/docker). To find their exact location, you can use docker inspect [container_id], or in the latest versions, the Docker API.
See cpuguy's answer in this github issue: https://github.com/docker/docker/issues/12853#issuecomment-123953258 for more details.
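For example, to locate a volume's host-side path (the output here is illustrative and its format varies between Docker versions):

$ docker inspect --format '{{ json .Mounts }}' test_test_1
[{"Source":"/var/lib/docker/volumes/<volume_id>/_data","Destination":"/var/www/html",...}]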
Is it possible to mount a volume from a container into another container on a different path? E.g.
contA exposes a volume /source
mount it in another container: docker run --volumes-from contA -v /source/somedir:/etc/otherdir
I'm trying to use this with docker-compose and jwilder/nginx-proxy:
docker-compose.yml
myapp:
  build: .
  command: ./run.sh
  volumes:
    - /source

nginx:
  image: jwilder/nginx-proxy
  volumes_from:
    - myapp
  volumes:
    - /source/vhost.d:/etc/nginx/vhost.d:ro
    - /var/run/docker.sock:/tmp/docker.sock
  links:
    - myapp:myapp
When I try this, I can't see my files at /etc/nginx/vhost.d:
$ docker-compose run nginx bash
root@f200c1c476c7:/app# ls -l
total 32
-rw-r--r-- 1 root root 1076 Apr 9 22:10 Dockerfile
-rw-r--r-- 1 root root 1079 Apr 9 22:10 LICENSE
-rw-r--r-- 1 root root 129 Apr 9 22:10 Procfile
-rw-r--r-- 1 root root 8385 Apr 9 22:10 README.md
-rw-r--r-- 1 root root 5493 Apr 9 22:10 nginx.tmpl
root@f200c1c476c7:/app# ls -l /etc/nginx/vhost.d
total 0
root@f200c1c476c7:/app# ls -l /source/nginx/
total 8
-rw-r--r-- 1 1000 staff 957 Apr 24 07:17 dockerhost.me
It doesn't seem possible, considering that the syntax -v /host/path:/container/path is reserved for mounting a path from the host (and not from another container).
That leaves you with the option of adding, in your second container, a symbolic link from /etc/otherdir to /source/somedir (which will exist because of the --volumes-from contA directive); see the sketch below.
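A sketch of that approach, applied to the nginx-proxy example above (untested; the volumes: entry mapping /source/vhost.d would then be dropped from the compose file):

FROM jwilder/nginx-proxy
# /source/vhost.d is provided at runtime via volumes_from, so only
# the symbolic link itself is baked into the image
RUN rm -rf /etc/nginx/vhost.d && ln -s /source/vhost.d /etc/nginx/vhost.d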