How to create an LXC container without rootfs - lxc

I want to create a container without a rootfs in LXC 1.0.5 on Ubuntu 14.04.
I did this in previous versions of LXC: if you ran lxc-create without the "-t" option, it would create a container without a rootfs.
So I tried:
lxc-create -n foo
and I got this error:
lxc_container: Error creating container foo
I read the new lxc-create manpage, which says:
-t template
'template' is the short name of an existing 'lxc-template'
script that is called by lxc-create, eg. busybox, debian,
fedora, ubuntu or sshd. Refer to the examples in
/usr/local/share/lxc/templates for details of the expected
script structure. Alternatively, the full path to an
executable template script can also be passed as a parameter.
"none" can be used to force lxc-create to skip rootfs
creation.
So I tried:
lxc-create -n foo -t none
and I got an error again:
lxc_container: No such file or directory - bad template: none
lxc_container: bad template: none
lxc_container: Error creating container foo
What am I doing wrong?

Don't you hate it when reality changes, but the documentation doesn't?
Try using -t /bin/true
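For example (a minimal sketch of that workaround; the container name and the default LXC path are illustrative):

lxc-create -n foo -t /bin/true
ls /var/lib/lxc/foo    # the container config exists, but no rootfs was populated

Since /bin/true exits 0 without doing anything, lxc-create runs it as the "template" and effectively skips rootfs creation. The manpage quoted above confirms that a full path to an executable template script is accepted.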

Related

I am trying to access a file and I am getting the `docker: invalid reference format: repository name must be lowercase` error. Any advice?

I am completely new to Linux and Docker, so please be patient; I would greatly appreciate an easy-to-understand answer. I am following this guide: https://degauss.org/using_degauss.html and I am at the section "Using the DeGAUSS Geocoder". I have set my working directory and I am trying to run docker run --rm -v $PWD:/tmp degauss/geocoder:3.2.1 filtered_file.csv (I changed the file name for this example, as well as the version of geocoder). However, when I type that into the Ubuntu 22.04.1 Linux subsystem, I get the following error: docker: invalid reference format: repository name must be lowercase. I am not sure what this means. I changed my working directory using cd /mnt/c/Users/Name/Desktop/"FOLDER ONE"/"Folder 0002"/"Here"/.
(pwd shows me that the working directory is /mnt/c/Users/Name/Desktop/FOLDER ONE/Folder 0002/Here/)
Thanks in advance for your help.
I am expecting the geocoder to run, and I have Docker open in the background. All I have been able to do is type in docker run --rm -v $PWD:/tmp degauss/geocoder:3.2.1 filtered_file.csv, and it is not working, as noted by the error docker: invalid reference format: repository name must be lowercase. The latest version of geocoder is 3.2.1.
You need to put the variable reference $PWD in double quotes. This is generally good practice when using the Unix Bourne shell and I'd recommend always doing it.
docker run --rm -v "$PWD:/tmp" ...
# ^ ^
What's happening here is that the shell first expands the variable reference, then splits the command into words. So you get
docker run --rm -v /mnt/.../FOLDER ONE/Folder 0002/Here/:/tmp ...
docker run --rm \
-v /mnt/.../FOLDER \ # create anonymous volume on this container directory
ONE/Folder \ # image name
0002/Here/:/tmp ... # main container command
The double quotes avoid the word splitting, and you very rarely actually want it.
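You can see the splitting for yourself (a quick demonstration; the path is hypothetical):

dir='/mnt/c/Users/Name/Desktop/FOLDER ONE'
printf '<%s>\n' $dir      # unquoted: expands, then splits into two words
printf '<%s>\n' "$dir"    # quoted: stays a single word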

Jenkins Docker plugin volume/mount what syntax to use

I have a linux vm on which I installed docker. I have several docker containers with the different programs I have to use. Here's my architecture:
Everything is working fine except for the red box.
What I am trying to do is to dynamically provide a jenkins docker-in-docker agent with the cloud functionality in order to build my docker images and push them to the docker registry I set up.
I have been looking for documentation to create a docker in docker container and I found this:
https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
This article states that in order to avoid problems with my main docker installation I have to create a volume:
-v /var/run/docker.sock:/var/run/docker.sock
I tested my image locally and I have no problem running
docker run -d --name test -v /var/run/docker.sock:/var/run/docker.sock <image>
docker exec -it test /bin/bash
docker run hello-world
The container is using the linux vm docker installation to build and run the docker images so everything is fine.
However, I face problems when it comes to the jenkins docker cloud configuration.
From what I gather, since the #826 build, the Docker Jenkins plugin has changed its syntax for volumes.
This is the configuration I tried:
And the error message I have when trying to launch the agent:
Reason: Template provisioning failed.
com.github.dockerjava.api.exception.BadRequestException: {"message":"create
/var/run/docker.sock: \"/var/run/docker.sock\" includes invalid characters for a local
volume name, only \"[a-zA-Z0-9][a-zA-Z0-9_.-]\" are allowed. If you intended to pass a
host directory, use absolute path"}
I also tried that configuration:
Reason: Template provisioning failed.
com.github.dockerjava.api.exception.BadRequestException: {"message":"invalid mount config for type \"volume\": invalid mount path: './var/run/docker.sock' mount path must be absolute"}
I do not get what that means, as on my Linux VM the docker.sock absolute path is /var/run/docker.sock, and it is the same path inside the docker-in-docker container I ran locally...
I tried to check the source code to find what I did wrong, but it's unclear to me what the code is doing (https://github.com/jenkinsci/docker-plugin/blob/master/src/main/java/com/nirima/jenkins/plugins/docker/DockerTemplateBase.java, from row 884 onward). I also tried with backslashes, etc. Nothing worked.
Does anyone have any idea what the expected syntax is in that configuration panel for setting up a simple volume?
Change the configuration to this:
type=bind,source=/var/run/docker.sock,destination=/var/run/docker.sock
It is not a volume; it is a bind mount.
This worked for me
type=bind,source=/sys/fs/cgroup,target=/sys/fs/cgroup,readonly
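If you want to test the same mount outside Jenkins, the equivalent plain docker CLI call looks like this (a sketch; the image name is a placeholder):

docker run --rm \
  --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
  <image>

The plugin's mounts field appears to accept the same key=value syntax as docker run --mount, which is why the type=bind form works where the bare path did not.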

Docker on a server: --mount type=bind or -v?

I've been trying to run a Nextflow pipeline with a Docker image I've created on a server. I've tested this pipeline on my local client and it works fine, but trying to run it on a server (Arch Linux, Docker version 18.09.6) gives me many different errors. The problem is that the pipeline requires a huge database (NCBI nt, ~120 GB) as an "input" (just read, not modified). On the local client, I've used the temp flag for Nextflow, which is equivalent to the --mount type=volume,src=<src_path>,target=/tmp flag, and this works perfectly. Once I've uploaded everything to the server, I get different problems. I've been accessing the server over ssh (Windows PowerShell and WSL2). I've tried the following options:
Using --mount type=bind,src=<src_path>,target=/output:
I get the following error:
docker: Error response from daemon: invalid mount config for type "bind": bind source path does not exist: <src_path>/. The same occurs if many different flags (e.g. readonly) or different propagation forms are used.
Using -v <src_path>:/output: A different error is given:
docker: Error response from daemon: error while creating mount source path '<src_path>': mkdir /share/library: permission denied. I find this error quite unusual since my user does have the permissions to create files and directories in the src_path. Is there any way of forcing Docker to use the permissions of my user?
Is --mount or -v even the right way of accessing this database from within the container? Any help or idea is welcome, since nothing I've found seems to get me any further...
EDIT:
Rather than a "nextflow" question, it is more of a docker question, since running docker run <any_option_mentioned_above> <img_name>
returns the same errors.
This is the setup I've used to run nextflow processes inside docker containers:
main:
process mount_example {
    label 'dockerised'

    input:
    file foo from bar

    containerOptions "-v /path/to/source:/path/to/target/inside/docker/"

    script:
    """
    ls /path/to/target/inside/docker/
    """
}
and in config:
docker {
    enabled = true
    registry = 'yourdockerregister'
}

process {
    withLabel: dockerised {
        container = 'yourdockerregistry'
    }
}
One of the features of a host mount is that Docker will create the folder if it doesn't already exist. Otherwise, when doing a bind mount in Linux, which is what this option is doing:
--mount type=bind,src=<src_path>,target=/output
Linux will not create the directory for you and the mount will fail. To resolve this, you can switch back to a host mount, or create the directory in advance.
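A sketch of the second option, creating the directory before bind-mounting it (the host path and image name are placeholders):

mkdir -p /data/ncbi-nt                       # hypothetical host path for the database
docker run --rm \
  --mount type=bind,src=/data/ncbi-nt,target=/output,readonly \
  <image>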

Use nvidia-docker instead of docker with Ansible

I'm trying to figure out how to use nvidia-docker (https://github.com/NVIDIA/nvidia-docker) with Ansible's docker_container module (https://docs.ansible.com/ansible/latest/docker_container_module.html#docker-container).
Problem
My current Ansible playbook executes my container using the "docker" command instead of "nvidia-docker".
What I have done
According to some reading, I have tried adding my devices, without success:
docker_container:
  name: testgpu
  image: "{{ image }}"
  devices: ['/dev/nvidiactl', '/dev/nvidia-uvm', '/dev/nvidia0', '/dev/nvidia-uvm-tools']
  state: started
Note that I tried different syntaxes for devices (inline, etc.), but still got the same problem.
This command does not throw any error. As expected, it creates a Docker container from my image and tries to start it.
Looking at my container logs:
terminate called after throwing an instance of 'std::runtime_error'
what(): No CUDA driver found
which is the exact same error I'm getting when running
docker run -it <image>
instead of
nvidia-docker run -it <image>
Any ideas how to override the docker command when using docker_container with Ansible?
I can confirm my CUDA drivers are installed, and all the /dev/nvidia* paths are valid.
Thanks
The docker_container module doesn't use the docker executable; it uses the Docker daemon API through the docker-py Python library.
Looking at the nvidia-docker wrapper script, it sets --runtime=nvidia and -e NVIDIA_VISIBLE_DEVICES.
To set NVIDIA_VISIBLE_DEVICES you can use the env argument of docker_container.
But I see no way to set the runtime via the docker_container module as of the current Ansible 2.4.
You can try to overcome this by setting "default-runtime": "nvidia" in your daemon.json configuration file, so the Docker daemon will use the nvidia runtime by default.
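A sketch of that daemon.json change (the runtimes entry assumes the nvidia-container-runtime binary is installed; the restart command assumes systemd):

sudo tee /etc/docker/daemon.json <<'EOF'
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF
sudo systemctl restart docker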

Are you trying to mount a directory onto a file (or vice-versa)?

I have Docker version 17.06.0-ce. When I try to run NGINX using Docker with the command:
docker run -p 80:80 -p 8080:8080 --name nginx -v $PWD/www:/www -v $PWD/conf/nginx.conf:/etc/nginx/nginx.conf -v $PWD/logs:/wwwlogs -d nginx:latest
it shows:
docker: Error response from daemon: oci runtime error:
container_linux.go:262: starting container process caused
"process_linux.go:339: container init caused \"rootfs_linux.go:57:
mounting \\"/appdata/nginx/conf/nginx.conf\\" to rootfs
\\"/var/lib/docker/aufs/mnt/dcea22444e9ffda114593b18fc8b574adfada06947385aedc2ac09f199188fa0\\"
at
\\"/var/lib/docker/aufs/mnt/dcea22444e9ffda114593b18fc8b574adfada06947385aedc2ac09f199188fa0/etc/nginx/nginx.conf\\"
caused \\"not a directory\\"\""
: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
If I do not mount the nginx.conf file, everything is okay. So, how can I mount the configuration file?
This should no longer happen (since v2.2.0.0), see here
If you are using Docker for Windows, this error can happen if you have recently changed your password.
How to fix:
First make sure to delete the broken container's volume
docker rm -v <container_name>
Update: The steps below may work without needing to delete volumes first.
Open Docker Settings
Go to the "Shared Drives" tab
Click on the "Reset Credentials..." link on the bottom of the window
Re-Share the drives you want to use with Docker
You should be prompted to enter your username/password
Click "Apply"
Go to the "Reset" tab
Click "Restart Docker"
Re-create your containers/volumes
Credit goes to BaranOrnarli on GitHub for the solution.
TL;DR: Remove the volumes associated with the container.
Find the container name using docker ps -a then remove that container using:
docker rm -v <container_name>
Problem:
The error you are facing might occur if you previously ran the docker run command while the file was not present at its expected location in the host directory.
In that case, the Docker daemon would have created a directory in its place, which later fails to map to the file when the correct file is put in the host directory and the docker command is run again.
Solution:
Remove the volumes that are associated with the container. If you are not concerned about other container volumes, you can also use:
# WARNING, THIS WILL REMOVE ALL VOLUMES
docker volume rm $(docker volume ls -q)
This happens because Docker recognizes $PWD/conf/nginx.conf as a folder and not as a file. Check whether $PWD/conf/nginx.conf is actually a directory.
Test with:
> cat $PWD/conf/nginx.conf
cat: nginx.conf/: Is a directory
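If it is indeed a directory, remove it and put the real file in its place (a sketch; rmdir only removes empty directories, and the source location is hypothetical):

rmdir "$PWD/conf/nginx.conf"                          # remove the mistakenly created directory
cp /path/to/your/nginx.conf "$PWD/conf/nginx.conf"    # restore the actual config file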
Otherwise, open a Docker issue.
It's working fine for me with the same configuration.
The explanation given by @Ayushya was the reason I hit this somewhat confusing error message, and the necessary housekeeping can be done easily like this:
$ docker container prune
$ docker volume prune
Answer for people using Docker Toolbox
There have been at least 3 answers here touching on the problem, but not explaining it properly and not giving a full solution. This is just a folder mounting problem.
Description of the problem:
Docker Toolbox bypasses the Hyper-V requirement of Docker by creating a virtual machine (in VirtualBox, which comes bundled). Docker is installed and run inside the VM. In order for Docker to function properly, it needs to have access to the project files from the host machine, which here it doesn't.
After I installed Docker Toolbox, it created the VirtualBox VM and only mounted C:\Users to the machine, as /c/Users. My project was in C:\projects, so nowhere on the mounted volume. When I was sending the path to the VM, it would not exist, as C:\projects isn't mounted. Hence the error above.
Let's say I had my project containing my ngnix config in C:/projects/project_name/
Fixing it:
Go to VirtualBox, right click on Default (the VM from Docker) > Settings > Shared Folders
Click the small icon with the plus on the right side to add a new share. I used the following settings:
The above will map C:\projects to /projects (ROOT/projects) in the VM, meaning that now you can reference any path in projects like this: /projects/project_name - because project_name from C:\projects\project_name is now mounted.
To use relative paths, please consider naming the share c/projects, not projects.
Restart everything and it should now work properly. I manually stopped the virtual machine in VirtualBox and restarted the Docker Toolbox CLI.
In my docker-compose file, I now reference nginx.conf like this:
volumes:
- /projects/project_name/docker_config/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
Where nginx.conf actually resides in C:\projects\project_name\docker_config\nginx\nginx.conf
I had the same problem. I was using Docker Desktop with WSL in Windows 10 17.09.
Cause of the problem:
The problem is that Docker for Windows expects you to supply your volume paths in a format that matches this:
/c/Users/username/app
BUT, WSL instead uses the format:
/mnt/c/Users/username/app
This is confusing because when I checked the file in the console I could see it, and to me everything looked correct. I wasn't aware of Docker for Windows' expectations about volume paths.
Solution to the problem:
I bound custom mount points to bridge the Docker for Windows and WSL differences:
sudo mount --bind /mnt/c /c
as suggested in this amazing guide: Setting Up Docker for Windows and WSL to Work Flawlessly. Everything is working perfectly now.
Before I started using WSL I was using Git Bash and I had this problem as well.
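To make that fix permanent instead of re-running mount --bind after every reboot, WSL can be told to mount drives at / directly (a sketch; assumes a WSL build that reads /etc/wsl.conf, and WSL must be restarted afterwards):

sudo tee /etc/wsl.conf <<'EOF'
[automount]
root = /
EOF

Drives then appear at /c/... instead of /mnt/c/..., which matches what Docker for Windows expects.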
On my Mac I had to uncheck the box "Use gRPC FUSE for file sharing" in Settings -> General
Maybe someone finds this useful. My compose file had the following volume mounted:
./file:/dir/file
As ./file did not exist, Docker created it (by default as a folder) and mounted it into ABC.
In my case I had a container that resulted from
docker commit ABC cool_image
When I later created ./file and ran docker-compose up, I had the error:
[...] Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
The container brought up from cool_image remembered that /dir/file was a directory, and it conflicted with the newly created and mounted ./file.
The solution was:
touch ./file
docker run --name ABC -v "$PWD/file":/dir/file abc_image
# ... desired changes to ABC
docker commit ABC cool_image
I am using Docker Toolbox for Windows. By default the C drive is mounted automatically, so in order to mount files, make sure your files and folders are inside the C DRIVE.
Example: C:\Users\%USERNAME%\Desktop
I'll share my case here as this may save a lot of time for someone else in the future.
I had a perfectly working docker-compose setup on my macOS, until I started using docker-in-docker in GitLab CI. I was only given permission to work as Master in a repository, and the GitLab CI is self-hosted, set up by someone else; no other info was shared about how it's set up.
The following caused the issue:
volumes:
- ./.docker/nginx/wordpress/wordpress.conf:/etc/nginx/conf.d/default.conf
Only when I noticed that this might be running under Windows (after hours of scratching my head) did I try renaming wordpress.conf to default.conf and setting just the directory pathnames:
volumes:
- ./.docker/nginx/wordpress:/etc/nginx/conf.d
This solved the problem!
I had the same issue: docker-compose was creating a directory instead of a file, then crashing midway.
What I did:
Run the container without any mapping.
Copy the .conf file to the host location:
docker cp containername:/etc/nginx/nginx.conf ./nginx.conf
Remove the container (docker-compose down).
Put the mapping back.
Recreate the container.
Docker Compose will find the .conf file and map it, instead of trying to create a directory.
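The same steps as a shell sequence (a sketch; the container name and paths are illustrative):

docker-compose up -d                                        # first run, with the file mapping removed
docker cp containername:/etc/nginx/nginx.conf ./nginx.conf  # copy the file out to the host
docker-compose down
# put the ./nginx.conf:/etc/nginx/nginx.conf mapping back in docker-compose.yml
docker-compose up -d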
unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
I had a similar error with nginx in a Mac environment.
Docker didn't recognize the default.conf file correctly. Once I changed the relative path to an absolute path, the error was fixed.
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf
On Windows 10, I just got this error without changing anything in my docker-compose.yml file or my Docker configuration in general.
In my case, I was using a VPN with a firewall policy that blocks port 445.
After disconnecting from the VPN, the problem disappeared.
So I recommend checking your firewall and not using a proxy or VPN when running Docker Desktop.
Check Docker for windows - Firewall rules for shared drives for more details.
I hope this will help someone else.
Use the absolute/complete path instead of $PWD/conf/nginx.conf; then it will work. For example:
docker run --name nginx-container5 --rm -v /home/sree/html/nginx.conf:/etc/nginx/nginx.conf -d -p 90:80 nginx
b9ead15988a93bf8593c013b6c27294d38a2a40f4ac75b1c1ee362de4723765b
root@sree-VirtualBox:/home/sree/html# docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED          STATUS          PORTS                NAMES
b9ead15988a9   nginx   "nginx -g 'daemon of…"   7 seconds ago    Up 6 seconds    0.0.0.0:90->80/tcp   nginx-container5
e2b195a691a4   nginx   "/bin/bash"              16 minutes ago   Up 16 minutes   0.0.0.0:80->80/tcp   test-nginx
I experienced the same issue using Docker over WSL1 on Windows 10 with this command line:
echo $PWD
/mnt/d/nginx
docker run --name nginx -d \
-v $PWD/conf/nginx.conf:/etc/nginx/nginx.conf \
nginx
I resolved it by changing the path for the file on the host system to a UNIX style absolute path:
docker run --name nginx -d \
-v /d/nginx/conf/nginx.conf:/etc/nginx/nginx.conf \
nginx
or by using a Windows-style absolute path with / instead of \ as the path separator:
docker run --name nginx -d \
-v D:/nginx/conf/nginx.conf:/etc/nginx/nginx.conf \
nginx
To strip the /mnt part, which seems to cause the problems, from the path, I use Bash parameter expansion:
-v ${PWD/mnt\/}/conf/nginx.conf:/etc/nginx/nginx.conf
Updating Virtual Box to 6.0.10 fixed this issue for Docker Toolbox
https://github.com/docker/toolbox/issues/844
I was experiencing this kind of error:
mlepisto@DESKTOP-VKJ76GO MINGW64 ~/G/Projects
$ touch resolv.conf
mlepisto@DESKTOP-VKJ76GO MINGW64 ~/G/Projects
$ docker run --rm -it -v $PWD/resolv.conf:/etc/resolv.conf ubuntu /bin/bash
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"rootfs_linux.go:58: mounting \\\"/c/Users/mlepisto/G/Projects/resolv.conf\\\" to rootfs \\\"/mnt/sda1/var/lib/docker/overlay2/61eabcfe9ed7e4a87f40bcf93c2a7d320a5f96bf241b2cf694a064b46c11db3f/merged\\\" at \\\"/mnt/sda1/var/lib/docker/overlay2/61eabcfe9ed7e4a87f40bcf93c2a7d320a5f96bf241b2cf694a064b46c11db3f/merged/etc/resolv.conf\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
# mounting to some other file name inside the container did work just fine
mlepisto@DESKTOP-VKJ76GO MINGW64 ~/G/Projects/
$ docker run --rm -it -v $PWD/resolv.conf:/etc/resolv2.conf ubuntu /bin/bash
root@a5020b4d6cc2:/# exit
exit
After updating VirtualBox, all commands did work just fine 🎉
Had the same head-scratcher: because I did not have the file locally, Docker created it as a folder.
mimas@Anttis-MBP:~/random/dockerize/tube$ ls
Dockerfile
mimas@Anttis-MBP:~/random/dockerize/tube$ docker run --rm -v $(pwd)/logs.txt:/usr/app/logs.txt devopsdockeruh/first_volume_exercise
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"rootfs_linux.go:58: mounting \\\"/Users/mimas/random/dockerize/tube/logs.txt\\\" to rootfs \\\"/var/lib/docker/overlay2/75891ea3688c58afb8f0fddcc977c78d0ac72334e4c88c80d7cdaa50624e688e/merged\\\" at \\\"/var/lib/docker/overlay2/75891ea3688c58afb8f0fddcc977c78d0ac72334e4c88c80d7cdaa50624e688e/merged/usr/app/logs.txt\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
mimas@Anttis-MBP:~/random/dockerize/tube$ ls
Dockerfile logs.txt/
For me, this did not work:
volumes:
- ./:/var/www/html
- ./nginx.conf:/etc/nginx/conf.d/site.conf
But this works fine (obviously I moved my config file into the new directory too):
volumes:
- ./:/var/www/html
- ./nginx/nginx.conf:/etc/nginx/conf.d/site.conf
I had this problem under Windows 7 because my Dockerfile was on a different drive.
Here's what I did to fix the problem:
Open VirtualBox Manager
Select the "default" container and edit the settings.
Select Shared Folders and click the icon to add a new shared folder
Folder Path: x:\
Folder Name: /x
Check Auto-mount and Make Permanent
Restart the virtual machine
At this point, docker-compose up should work.
I got the same error on Windows 10 after an update of Docker: 2.3.0.2 (45183).
... caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
I was using absolute paths like this //C/workspace/nginx/nginx.conf and everything worked like a charm.
The update broke my docker-compose, and I had to change the paths to /C/workspace/nginx/nginx.conf with a single / for the root.
Note that this situation will also occur if you try to mount a volume from the host which has not been added to the Resources > File Sharing section of Docker Preferences.
Adding the root path as a file sharing resource will permit Docker to access the resource and mount it to the container. Note that you may need to erase the contents of your Docker container before re-attempting to mount the volume.
For example, if your application is located at /mysites/myapp, you will want to add /mysites as the file sharing resource location.
In my case it was a problem with Docker for Windows and a partition encrypted by BitLocker. If your project files are on an encrypted partition, then after a restart and unlocking the drive, Docker doesn't see the project files properly.
All you need to do is restart Docker.
CleanWebpackPlugin can be the problem. In my case, in my Dockerfile I copy a file like this:
COPY --chown=node:node dist/app.js /usr/app/app.js
and then during development I mount that file via docker-compose:
volumes:
- ./dist/app.js:/usr/app/app.js
I would intermittently get the "Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type." error, or some version of it.
The problem was that CleanWebpackPlugin was deleting the file before webpack had re-built it. If Docker tried to mount the file while it was deleted, Docker would fail; hence it was intermittent.
Either remove CleanWebpackPlugin completely or configure its options to play nicer.
I had this happen when the JSON file on the host had the executable permission set. I don't know the reason behind this.
For me, it was enough to just do this:
docker compose down
docker compose up -d
I have solved the mount problem. I am using a Windows 7 environment, and the same problem happened to me:
Are you trying to mount a directory onto a file?
The Docker Toolbox VM syncs C:\Users\ by default, so I moved my project under C:\Users\ and recreated it. Now it works.
