LXC on Raspberry Pi using Raspbian OS - lxc

Mount a cgroup
pico /etc/fstab
add the line "lxc /sys/fs/cgroup cgroup defaults 0 0"
mount -a
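As a side note, that fstab entry uses the standard six whitespace-separated fields; a small sketch of how they split (a hypothetical check, not part of the original steps):

```shell
# Split the fstab entry into its six fields: device, mount point,
# filesystem type, options, dump flag, and fsck pass number.
line="lxc /sys/fs/cgroup cgroup defaults 0 0"
set -- $line
echo "device=$1 mountpoint=$2 type=$3 options=$4 dump=$5 pass=$6"
```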
Create a Directory to Store Guests
mkdir -p /var/lxc/guests
Create a File System for the Container
Let's create a container called "test".
First, create a filesystem for the container. This may take some time:
apt-get install debootstrap
mkdir -p /var/lxc/guests/test
debootstrap wheezy /var/lxc/guests/test/fs/ http://archive.raspbian.org/raspbian
Modify the Container’s File System
chroot /var/lxc/guests/test/fs/
Change the root password.
passwd
Change the hostname as you wish.
pico /etc/hostname
Exit the chroot:
exit
Create a Minimal Configuration File
pico /var/lxc/guests/test/config
Enter the following:
lxc.utsname = test
lxc.tty = 2
lxc.rootfs = /var/lxc/guests/test/fs
Create the Container
lxc-create -f /var/lxc/guests/test/config -n test
Test the Container
lxc-start -n test -d
And this error came up:
lxc-start: symbol lookup error: lxc-start: undefined symbol: current_config

Related

Unable to mkdir on a single node Kubernetes cluster

This guide
https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
suggests doing:
sudo mkdir /mnt/data
on a single node cluster.
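For context, the guide's example PersistentVolume points its hostPath at that directory, approximately like this (paraphrased from the linked page):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"   # must exist on the node before the PV is used
```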
Using a single node Kubernetes cluster (Kubernetes on Docker Desktop) I get:
sudo mkdir /mnt/data
bash: sudo: command not found
Doing
mkdir /mnt/data
I get:
mkdir: can't create directory '/mnt/data': Read-only file system
Any suggestions?

solita/docker-systemd docker-compose

I want to use an image with systemctl in it under docker-compose. I found this image on the internet: https://github.com/solita/docker-systemd. It works well on its own, but when I tried to use it with docker-compose it doesn't work (the container starts, but systemctl doesn't; it gives this error: "System has not been booted with systemd as init system (PID 1). Can't operate.").
test1:
  container_name: 'test1'
  build: './test'
  volumes:
    - /:/host
    - /sys/fs/cgroup:/sys/fs/cgroup:ro
  security_opt:
    - "seccomp=unconfined"
  tmpfs:
    - /run
    - /run/lock
  privileged: true
And the build file, test.sh, is:
#!/bin/sh
set -eu
if nsenter --mount=/host/proc/1/ns/mnt -- mount | grep /sys/fs/cgroup/systemd >/dev/null 2>&1; then
    echo "The systemd cgroup hierarchy is already mounted at /sys/fs/cgroup/systemd."
else
    if [ -d /host/sys/fs/cgroup/systemd ]; then
        echo "The mount point for the systemd cgroup hierarchy already exists at /sys/fs/cgroup/systemd."
    else
        echo "Creating the mount point for the systemd cgroup hierarchy at /sys/fs/cgroup/systemd."
        mkdir -p /host/sys/fs/cgroup/systemd
    fi
    echo "Mounting the systemd cgroup hierarchy."
    nsenter --mount=/host/proc/1/ns/mnt -- mount -t cgroup cgroup -o none,name=systemd /sys/fs/cgroup/systemd
fi
echo "Your Docker host is now configured for running systemd containers!"
If you want to run "systemctl" in docker to start/stop services then you could do that without systemd. The docker-systemctl-replacement is made for that.
If you need to use systemd, you can follow the repo; it's for Rocky Linux.
However, you can use this repo to have systemd enabled for Ubuntu.
When you use 'docker run', you need to set the option below (enable rw) and connect to the container with docker exec:
`--volume=/sys/fs/cgroup:/sys/fs/cgroup:rw`
You can encounter other compatibility problems with systemd in Docker (for timedatectl, loginctl, etc.) or with the service command:
$> dnf install initscripts systemd
$> systemctl start systemd-logind
Afterwards, you can migrate it to docker-compose.
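A hedged sketch of what that might look like in docker-compose form (the image name is a placeholder; the rw cgroup volume replaces the ro one from the question):

```yaml
test1:
  image: some-systemd-image   # placeholder: build it from one of the repos above
  privileged: true
  volumes:
    - /sys/fs/cgroup:/sys/fs/cgroup:rw
  tmpfs:
    - /run
    - /run/lock
```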

Alpine Linux - root mounted as ro iso9660 filesystem how can I remount as rw overlay?

I'm on OSX and I've got Docker for Mac installed.
On OSX, Docker runs its containers inside a small hypervisor; we can see this from a process listing:
❯ ps awux | grep docker
bryanhunt 512 1.8 0.2 10800436 34172 ?? S Fri11am 386:09.03 com.docker.hyperkit -A -u -F vms/0/hyperkit.pid -c 8 -m 6144M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-vpnkit,path=s50,uuid=c0fac0ff-fb9a-473f-bf44-43d7abdc701d -U 05c2af3a-d417-43fd-b0d4-9d443577f207 -s 2:0,ahci-hd,/Users/bryanhunt/Library/Containers/com.docker.docker/Data/vms/0/Docker.raw -s 3,virtio-sock,guest_cid=3,path=vms/0,guest_forwards=2376;1525 -s 4,ahci-cd,/Applications/Docker.app/Contents/Resources/linuxkit/docker-for-mac.iso -s 5,ahci-cd,vms/0/config.iso -s 6,virtio-rnd -s 7,virtio-9p,path=s51,tag=port -l com1,autopty=vms/0/tty,asl -f bootrom,/Applications/Docker.app/Contents/Resources/uefi/UEFI.fd,,
bryanhunt 509 0.0 0.1 558589408 9608 ?? S Fri11am 0:30.26 com.docker.driver.amd64-linux -addr fd:3 -debug
Note how it's running the VM from an ISO image, /Applications/Docker.app/Contents/Resources/linuxkit/docker-for-mac.iso. This is probably a good idea, because things would get tricky if users tampered with the VM image; however, in this case, that's exactly what I want to do.
I can get inside the Docker VM by running a privileged container which executes the nsenter utility in order to enter the host process space.
docker run --rm -it --privileged --pid=host walkerlee/nsenter -t 1 -m -u -i -n sh
So everything is good. I can now move onto the next stage, install and run plotnetcfg.
plotnetcfg creates very nice graphviz diagrams of networking configuration, and this is what I'd like to do, analyze the networking configuration inside the Docker VM (it's Alpine Linux BTW).
Here's an example of the sort of diagram plotnetcfg can generate:
That's my actual goal - to visualize Docker networking configuration for a hackathon.
Now finally the description of the problem.
The root filesystem is an iso9660 mount.
/ # mount |grep iso
/dev/sr0 on / type iso9660 (ro,relatime)
Is there a way to remount root, using the aufs stacked filesystem or any other means so that I can update the system packages, download, compile and execute the plotnetcfg utility, and finally, export the generated graphviz dot file and render it elsewhere?
For the question: root mounted as ro iso9660 filesystem, how can I remount as rw overlay?
The answer is: there is no way to remount it as rw, but tmpfs /tmp or shm /dev/shm is writable if you really want to add something temporarily.
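A quick probe illustrating that (here /tmp stands in for the tmpfs mount; the paths are illustrative):

```shell
# Even while / is mounted read-only, tmpfs-backed paths such as /tmp remain writable.
d=$(mktemp -d)                    # scratch directory under /tmp
echo "scratch data" > "$d/probe"  # this write succeeds on tmpfs
contents=$(cat "$d/probe")
echo "$contents"
rm -r "$d"
```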
For the things you want to do:
With docker run you can already access the Docker VM's network.
You don't need to modify the host to change the network; you can just add --privileged -v /dev:/dev to docker run, then install packages in the container and create the interfaces you want:
docker run --rm -it --privileged -v /dev:/dev wener/base ifconfig
For example, you can create a tap or tun device in the container; I use tinc in a container to create a host VPN.

Docker can't load config file, but container works fine

I'm using docker-machine with generic driver to deploy containers on an existing remote host. But when I switch to the remote host and try to run a container, this happens:
$ docker-machine create --driver generic --generic-ip-address=$REMOTEIP \
--generic-ssh-key ~/.ssh/id_rsa --generic-ssh-user $REMOTEUSER vm
[works fine]
$ eval $(docker-machine env vm) #switch to remote host
[works fine]
$ docker run -it busybox sh
WARNING: Error loading config file:/home/user/.docker/config.json - open /home/user/.docker/config.json: permission denied
[Even with the warning, runs fine]
The container runs fine anyway, but I want to solve that warning message.
Given that user doesn't exist on the remote host, I guess that this file doesn't exist. But...
1) Why does the engine search for it in the first place? Shouldn't it search for the config.json of $REMOTEUSER instead?
2) And why does the container run properly on the remote host anyway?
Docker is not searching for that file on the remote host but on the local host. It turns out the file exists, and it's owned by root:
$ ls -lsa ~/.docker/config.json 4 -rw------- 1 root root \
95 dic 29 15:29 /home/user/.docker/config.json
That's why it says permission denied.
A simple chown fixes the issue:
$ sudo chown user:user /home/user/.docker -R
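The failure mode itself is easy to reproduce without Docker: a file with mode 600 is readable only by its owner (a throwaway temp file stands in for config.json here):

```shell
# mktemp creates a file owned by the current user; chmod 600 restricts it
# to owner read/write, the same mode the root-owned config.json had.
f=$(mktemp)
chmod 600 "$f"
mode=$(ls -l "$f" | cut -c1-10)  # just the permission string
echo "$mode"                     # prints: -rw-------
rm "$f"
```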

What's the best way to share files from Windows to Boot2docker VM?

I have my code ready on Windows, but I find it's not easy to share it with boot2docker.
I also find that boot2docker can't persist my changes. For example, if I create a folder /temp and then restart boot2docker, the folder disappears, which is very inconvenient.
What is your approach when you have some code on Windows, but you need to dockerize it?
---update---
I tried updating the settings in VirtualBox and restarting boot2docker, but it's not working on my machine:
docker#boot2docker:/$ ls -al /c
total 4
drwxr-xr-x 3 root root 60 Jun 17 05:42 ./
drwxrwxr-x 17 root root 400 Jun 17 05:42 ../
dr-xr-xr-x 1 docker staff 4096 Jun 16 09:47 Users/
Boot2Docker is a small Linux VM running on VirtualBox. So before you can use your files (from Windows) in Docker (which is running in this VM), you must first share your code with the Boot2Docker VM itself.
To do so, you mount your Windows folder to the VM when it is shutdown (here a VM name of default is assumed):
C:/Program Files/Oracle/VirtualBox/VBoxManage sharedfolder \
add default -name win_share -hostpath c:/work
(Alternatively you can also open the VirtualBox UI and mount the folder to your VM just as you did in your screenshot!)
Now ssh into the Boot2Docker VM from the Docker Quickstart Terminal:
docker-machine ssh default
Then perform the mount:
Make a folder inside the VM: sudo mkdir /VM_share
Mount the Windows folder to it: sudo mount -t vboxsf win_share /VM_share
After that, you can access C:/work inside your Boot2Docker VM:
cd /VM_share
Now that your code is present inside your VM, you can use it with Docker, either by mounting it as a volume to the container:
docker-machine ssh default
docker run --volume /VM_share:/folder/in/container some/image
Or by using it while building your Docker image:
...
ADD /my_windows_folder /folder
...
See this answer.
I have Windows 10 Home edition with Docker toolbox 1.12.2 and VirtualBox 5.1.6.
I was able to mount a folder under C:\Users successfully in my container without doing any extra steps such as docker-machine ssh default.
Example:
docker run -it --rm -v /c/Users/antonyj/Documents/code:/mnt ubuntu /bin/bash
So having your files under C:\Users probably is the simplest thing to do.
If you do not want to have your files under C:\Users, then you have to follow the steps in the accepted answer.
Using Docker Toolbox, the shared directory can only be under /c/Users:
Invalid directory. Volume directories must be under your Users directory
The commands for Step 1 and Step 2 can be run in the "Docker Quickstart Terminal":
# Step 1. VirtualBox. If the command line errors out, add the shared folder manually in the VirtualBox UI instead, as shown above.
"C:/Program Files/Oracle/VirtualBox/VBoxManage.exe" sharedfolder add default --name "E_DRIVE" --hostpath "e:\\" --automount
# Try 1. Only a temporary effect; the mount is lost after the VM restarts.
#docker-machine ssh default "sudo mkdir -p /e" # Create a directory identifier, consistent with the Windows drive letter
#docker-machine ssh default "sudo mount -t vboxsf -o uid=1000,gid=50 E_DRIVE /e"
# Try 2. Modify /etc/fstab. This is not a permanent mount either; /etc/fstab is reset on each restart.
#docker-machine ssh default "sudo sed -i '$ a\E_DRIVE /e vboxsf uid=1000,gid=50 0 0' /etc/fstab"
# Step 2. `C:\Program Files\Docker Toolbox\start.sh` https://github.com/docker/machine/issues/1814#issuecomment-239957893
docker-machine ssh default "cat <<EOF | sudo tee /var/lib/boot2docker/bootlocal.sh && sudo chmod u+x /var/lib/boot2docker/bootlocal.sh
#!/bin/sh
mkdir -p /e
mount -t vboxsf -o uid=1000,gid=50 E_DRIVE /e
EOF
"
Then restart the VM. Try this: docker run --name php-fpm --rm -it -v /e:/var/www/html php:7.1.4-fpm /bin/bash
References:
What's the best way to share files from Windows to Boot2docker VM?
http://hessian.cn/p/1502.html
Windows + Boot2Docker, How to add D:\ drive to be accessible from within docker?
In the System Tray, you should have the cute Docker whale swimming. Right click and select Settings.
Click on Apply. This will bring up the Credentials dialog and you will need to provide your current Windows credentials. Ensure that you give it correctly. I also suspect that you might need to be an administrator.
To mount our host directory (C:\data) in a container, we are going to use the -v (volume) flag while running the container. A sample run is shown here:
I have CentOS in my local Docker container.
docker run -v c:/data:/data centos ls /data
Mount a shared folder from the Windows host into the Linux guest (VM name 'default'):
Shutdown 'default' VM:
cd "C:\Program Files\Oracle\VirtualBox"
VBoxManage controlvm default poweroff
Add shared folder command line:
./VBoxManage sharedfolder add default -name win_share -hostpath "C:\docker\volumes"
Start the VM (headless, command-line interface only):
./VBoxManage startvm default --type headless
Connect to ssh:
docker-machine ssh default
Create VM sharedfolder directory:
sudo mkdir /sharedcontent
Mount Windows folder to host VM:
sudo mount -t vboxsf win_share /sharedcontent
