This page
https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
suggests running:
sudo mkdir /mnt/data
on a single-node cluster.
Using a single-node Kubernetes cluster (Kubernetes on Docker Desktop), I get:
sudo mkdir /mnt/data
bash: sudo: command not found
Doing
mkdir /mnt/data
I get:
mkdir: can't create directory '/mnt/data': Read-only file system
Any suggestions?
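Since there is no sudo and no writable /mnt inside the Docker Desktop VM, one way to sidestep creating the directory by hand is to let the kubelet create it, using the hostPath type DirectoryOrCreate. A sketch of the tutorial's PersistentVolume with that field added (name, size and path follow the linked tutorial; if /mnt is still read-only inside the VM, point path at a writable location instead):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
    type: DirectoryOrCreate    # kubelet creates the directory if it does not exist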
I'm following a tutorial on docker stack, swarm, compose, etc.
The teacher connects to a VM of the swarm and then deploys a docker stack from the directory docker@node1:~/srv/swarm-stack-1:
docker stack deploy -c example-voting-app-stack.yml voteapp
However, I can't get that far because I don't know how to copy the yaml compose file from the course repository into that directory inside the VM.
How can I find out where docker@node1 is located on my PC?
Here is what I tried:
docker-machine start node1
Docker-machine ssh node1
So I get
Tommaso@N552VW-Tommaso MINGW64 /c/Applicazioni_Tommaso/Docker Toolbox
$ Docker-machine ssh node1
( '>')
/) TC (\ Core is distributed with ABSOLUTELY NO WARRANTY.
(/-_--_-\) www.tinycorelinux.net
then
docker@node1:~/srv/swarm-stack-1$ pwd
/home/docker
OK then, but where is /home/docker located on my PC?
So I tried to get around the obstacle by creating a yml file inside the VM and editing it there, rather than copying it from another directory.
# create the directory
docker@node1:~$ mkdir srv
docker@node1:~$ cd srv
docker@node1:~/srv$ mkdir swarm-stack-1
docker@node1:~/srv$ cd swarm-stack-1
docker@node1:~/srv/swarm-stack-1$
# create the yml file
touch example-voting-app-stack.yml
and here I am stuck, because I don't know how to edit the file.
I can neither install vim nor install a package manager to install vim.
This is what I tried:
docker@node1:~/srv/swarm-stack-1$ vim example-voting-app-stack.yml
-bash: vim: command not found
docker@node1:~/srv/swarm-stack-1$ apt-get vim
-bash: apt-get: command not found
docker@node1:~/srv/swarm-stack-1$ yum install vim
-bash: yum: command not found
docker@node1:~/srv/swarm-stack-1$ apk install vim
-bash: apk: command not found
docker@node1:~/srv/swarm-stack-1$ sudo apt-get install vim
sudo: apt-get: command not found
docker@node1:~/srv/swarm-stack-1$ nano
-bash: nano: command not found
So, can somebody help me understand how to copy files into my VM (i.e. find out what its path is on my PC), or how to install a package manager and then install vim inside my VM?
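As a side note on the copy question itself, docker-machine also ships an scp subcommand that can copy a file from the host straight into the machine VM; a minimal sketch (the local path to the course checkout is an assumption, adjust it to yours):
# copy the compose file from the local course checkout into the VM's target directory
docker-machine scp /c/path/to/course-repo/example-voting-app-stack.yml node1:/home/docker/srv/swarm-stack-1/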
SOLVED
The solution here is not to ssh into the VM, but instead to switch the Docker client to the VM's context with:
docker-machine env node1
eval $(docker-machine env node1)
By doing so, your local docker client talks to the VM's daemon, so docker node ls and all the swarm commands work, but you still have access to your local files, because your shell never leaves the host.
So, after running the two lines above, I can finally change my current directory to the course repo containing the docker-compose file I want to pass to docker stack deploy, and run:
docker stack deploy -c example-voting-app-stack.yml voteapp
When you are done, to change the context back to the local machine do:
docker-machine env -u
eval $(docker-machine env -u)
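Putting the whole flow together, a sketch (the local path to the course repo is an assumption; adjust it to your checkout):
# point the local docker CLI at the node1 VM
eval $(docker-machine env node1)
# work from the local checkout of the course repo, no copying needed
cd /c/path/to/course-repo/swarm-stack-1
docker stack deploy -c example-voting-app-stack.yml voteapp
# verify that the services started
docker stack services voteapp
# point the CLI back at the local daemon when finished
eval $(docker-machine env -u)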
Unable to write/modify/create in the mounted folder via docker container
docker run -it --rm -v ${pwd}:/files ubuntu /bin/bash
When the container is running in interactive mode,
change directory to files:
cd files
Now try to create a directory in the mounted folder ('files'):
mkdir test
You get this error:
mkdir: cannot create directory 'test': Permission denied
How can I overcome this error?
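The usual culprits are an SELinux label on the host directory or a uid mismatch between the container user and the directory owner; a sketch of two common workarounds, using the same mount as in the question ("$PWD" is the bash equivalent of the ${pwd} used above):
# on SELinux hosts (Fedora/RHEL/CentOS), relabel the bind mount so the container is allowed to write
docker run -it --rm -v "$PWD":/files:Z ubuntu /bin/bash
# otherwise, make the host directory writable for the uid the container runs as
chmod -R a+rwX "$PWD"
docker run -it --rm -v "$PWD":/files ubuntu /bin/bash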
I am using the following command to run my container
docker run -d -p 9001:8081 --name nexus -v /Users/user.name/dockerVolume/nexus:/nexus-data sonatype/nexus3
The container starts and fails immediately, with the following logs:
mkdir: cannot create directory '../sonatype-work/nexus3/log': Permission denied
mkdir: cannot create directory '../sonatype-work/nexus3/tmp': Permission denied
Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file ../sonatype-work/nexus3/log/jvm.log due to No such file or directory
I was following this link to set it up
I have given the required permissions to the nexus directory.
I also tried the following SO link but that didn't help me either.
I was still getting the same error.
Docker Version 17.12.0-ce-mac47 (21805)
[EDIT]
I did make changes to the ownership of my nexus folder on my host:
sudo chown -R 200 ~/dockerVolume/nexus
On my Ubuntu server I had to run:
chown -R 200:200 path/to/directory
Note that it is not just 200, but 200:200 (user and group).
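Applied to the exact command from the question, the sequence would look roughly like this (the path is the one from the question):
# give the bind-mounted directory to uid/gid 200, which the nexus user inside the image runs as
sudo chown -R 200:200 /Users/user.name/dockerVolume/nexus
# remove the failed container and start it again with the same bind mount
docker rm -f nexus
docker run -d -p 9001:8081 --name nexus -v /Users/user.name/dockerVolume/nexus:/nexus-data sonatype/nexus3
# watch the startup logs; the 'Permission denied' mkdir errors should be gone
docker logs -f nexus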
If you hit this problem running Nexus3 inside a Kubernetes cluster, you can fix the ownership with an initContainer. Just add this to your pod spec:
initContainers:
  - name: volume-mount-hack
    image: busybox
    command: ["sh", "-c", "chown -R 200:200 /nexus-data"]
    volumeMounts:
      - name: <your nexus pvc volume name>
        mountPath: /nexus-data
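For orientation, a sketch of where that snippet sits inside a Deployment manifest (the deployment and volume names here are assumptions; use your own, and make sure the volume name matches the PVC mounted into the nexus container):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nexus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nexus
  template:
    metadata:
      labels:
        app: nexus
    spec:
      initContainers:
        - name: volume-mount-hack
          image: busybox
          command: ["sh", "-c", "chown -R 200:200 /nexus-data"]
          volumeMounts:
            - name: nexus-data            # assumed volume name
              mountPath: /nexus-data
      containers:
        - name: nexus
          image: sonatype/nexus3
          ports:
            - containerPort: 8081
          volumeMounts:
            - name: nexus-data
              mountPath: /nexus-data
      volumes:
        - name: nexus-data
          persistentVolumeClaim:
            claimName: nexus-data         # assumed PVC name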
The Dockerfile for the image is available in the sonatype/docker-nexus3 repo.
And mounting a volume is documented as:
Mount a host directory as the volume.
This is not portable, as it relies on the directory existing with correct permissions on the host. However it can be useful in certain situations where this volume needs to be assigned to certain specific underlying storage.
$ mkdir /some/dir/nexus-data && chown -R 200 /some/dir/nexus-data
$ docker run -d -p 8081:8081 --name nexus -v /some/dir/nexus-data:/nexus-data sonatype/nexus3
So don't forget to run, before your docker run:
chown -R 200 /Users/user.name/dockerVolume/nexus
Mount a cgroup
pico /etc/fstab
add the line “lxc /sys/fs/cgroup cgroup defaults 0 0”
mount -a
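A quick check (not part of the original steps) that the new fstab entry took effect:
# verify that the cgroup filesystem is now mounted at /sys/fs/cgroup
mount | grep cgroup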
Create a Directory to Store Hosts
mkdir -p /var/lxc/guests
Create a File System for the Container
Let’s create a container called “test”.
First, create a filesystem for the container. This may take some time
apt-get install debootstrap
mkdir -p /var/lxc/guests/test
debootstrap wheezy /var/lxc/guests/test/fs/ http://archive.raspbian.org/raspbian
Modify the Container’s File System
chroot /var/lxc/guests/test/fs/
Change the root password.
passwd
Change the hostname as you wish.
pico /etc/hostname
Exit the chroot.
exit
Create a Minimal Configuration File
pico /var/lxc/guests/test/config
Enter the following:
lxc.utsname = test
lxc.tty = 2
lxc.rootfs = /var/lxc/guests/test/fs
Create the Container
lxc-create -f /var/lxc/guests/test/config -n test
Test the Container
lxc-start -n test -d
And this error came up
lxc-start: symbol lookup error: lxc-start: undefined symbol: current_config
With help from Saied Kazemi I was able to checkpoint and restore a container using CRIU on Ubuntu 14, following "docker suspend and resume using criu".
Now I am trying to migrate this container from one location to another.
I am using these steps:
export cid=$(docker run -d ubuntu tail -f /dev/null)
docker exec $cid touch /test.walid
mkdir /tmp/docker-migration
mkdir /tmp/docker-migration/$cid
docker checkpoint --image-dir=/tmp/docker-migration/$cid $cid
ssh walid@192.168.1.10 mkdir /tmp/docker-migration
ssh walid@192.168.1.10 mkdir /tmp/docker-migration/$cid
scp -r /tmp/docker-migration/$cid walid@192.168.1.10:/tmp/docker-migration
ssh walid@192.168.1.10 mkdir /tmp/$cid
scp -r /var/lib/docker/0.0/containers/$cid walid@192.168.1.10:/tmp
ssh -t walid@192.168.1.10 sudo mv /tmp/$cid /var/lib/docker/0.0/containers/
ssh -t walid@192.168.1.10 sudo docker restore --force=true --image-dir=/tmp/docker-migration/$cid $cid
and got this response:
Error response from daemon: No such container: fea338e81750b2377c2a845e30c49b7055519e39448091715c2c6a7896da3562
Error: failed to restore one or more containers
Both machines have docker and criu installed, and checkpointing works on its own.
Docker container migration using CRIU is still under development. So far the focus of checkpoint and restore integration into Docker has been C/R'ing on the same machine.
That said, it is possible to manually migrate containers by copying not only the container image created by CRIU after checkpoint (as you have done) but also the container directory created by Docker in /var/lib/docker/0.0/containers/$cid, as well as the container's root filesystem in /var/lib/docker/0.0/image. Manually migrating the container's filesystem is a bit tricky, especially if you are using a union filesystem like AUFS or OverlayFS. Also, you need to restart the Docker daemon on the destination machine to see the container.
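A rough sketch of that manual copy, assuming both hosts run the same Docker version and storage driver and use the 0.0 graph root from the question (the container's root filesystem under /var/lib/docker/0.0/image still has to be moved separately, which is storage-driver specific):
# copy the container's metadata directory created by Docker
scp -r /var/lib/docker/0.0/containers/$cid walid@192.168.1.10:/tmp/
# on the destination, stop the daemon, move the directory into place, and restart it
ssh -t walid@192.168.1.10 "sudo service docker stop && sudo mv /tmp/$cid /var/lib/docker/0.0/containers/ && sudo service docker start"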
On the destination machine, you have to create or run a container. This container will be overwritten by the restored image.
So:
ssh -t walid@192.168.1.10 export NewID=$(docker run -d ubuntu tail -f /dev/null)
ssh -t walid@192.168.1.10 sudo docker restore --force=true --image-dir=/tmp/docker-migration/$cid $NewID
In my case, that worked like a charm!
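One caveat: with two separate ssh invocations the NewID variable set by the first command is gone by the time the second one runs; a sketch of doing both in a single remote shell (same hosts and paths as above):
# run both commands in one remote shell so $NewID is still defined when restore runs
ssh -t walid@192.168.1.10 "NewID=\$(docker run -d ubuntu tail -f /dev/null) && sudo docker restore --force=true --image-dir=/tmp/docker-migration/$cid \$NewID"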
Here is my test: migrating a container across nodes.
Restore:
vagrant ssh vm2 -- 'docker run --name=foo -d ubuntu tail -f /dev/null && docker rm -f foo'
docker create --name=CONTAINER_NAME base_image
docker restore --force=true --image-dir=/tmp/{dump files}
github: https://github.com/hixichen/criu_test