I want to apply a custom AppArmor profile, which I have loaded on the Docker host, to an Ubuntu Docker container. When I run the command below and then check apparmor_status inside the container, I get the following error:
docker run -d --privileged --security-opt "apparmor=docker-nginx-sample" -p 80:80 --name apparmor-nginx apparmubun sh -c 'sleep infinity'
# docker exec -it c67fb9a526ad sh
# apparmor_status
apparmor module is loaded.
apparmor filesystem is not mounted.
So I tried restarting the AppArmor service from within the docker run command itself, as below:
docker run -d --privileged -v /sys:/sys:ro --security-opt="apparmor:docker-nginx-sample" -p 80:80 --name=apparmor apparmubun sh -c 'service apparmor restart;apparmor_status;sleep infinity'
I get the following errors:
docker logs b8611278d34e
/etc/init.d/apparmor: 428: /lib/lsb/init-functions: cannot create /dev/null: Permission denied
Cache read/write disabled: interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.)
Warning: unable to find a suitable fs in /proc/mounts, is it mounted?
Use --subdomainfs to override.
Cache read/write disabled: interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.)
Warning: unable to find a suitable fs in /proc/mounts, is it mounted?
Use --subdomainfs to override.
Cache read/write disabled: interface file missing. (Kernel needs AppArmor 2.4 compatibility patch.)
Warning: unable to find a suitable fs in /proc/mounts, is it mounted?
Use --subdomainfs to override.
But if I execute the service restart command after the container has been created, it works fine:
~# docker exec -it 01b6bdf72586 sh -c 'service apparmor restart;apparmor_status'
* Restarting AppArmor * Reloading AppArmor profiles... [ OK ]
[ OK ]
apparmor module is loaded.
44 profiles are loaded.
25 profiles are in enforce mode.
Could you please help me do this as part of the docker run command itself? Thanks in advance.
Mount a cgroup
pico /etc/fstab
add the line “lxc /sys/fs/cgroup cgroup defaults 0 0”
mount -a
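After mount -a, you can confirm the cgroup filesystem is actually visible before going further. A quick check (on modern kernels this may show cgroup2 rather than cgroup v1):

```shell
# Verify that a cgroup filesystem is mounted; each matching line in
# /proc/mounts corresponds to one mounted cgroup hierarchy.
grep cgroup /proc/mounts
```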
Create a Directory to Store Hosts
mkdir -p /var/lxc/guests
Create a File System for the Container
Let’s create a container called “test”.
First, create a filesystem for the container. This may take some time
apt-get install debootstrap
mkdir -p /var/lxc/guests/test
debootstrap wheezy /var/lxc/guests/test/fs/ http://archive.raspbian.org/raspbian
Modify the Container’s File System
chroot /var/lxc/guests/test/fs/
Change the root password.
passwd
Change the hostname as you wish.
pico /etc/hostname
Exit the chroot.
exit
Create a Minimal Configuration File
pico /var/lxc/guests/test/config
Enter the following:
lxc.utsname = test
lxc.tty = 2
lxc.rootfs = /var/lxc/guests/test/fs
Create the Container
lxc-create -f /var/lxc/guests/test/config -n test
Test the Container
lxc-start -n test -d
And this error came up
lxc-start: symbol lookup error: lxc-start: undefined symbol: current_config
Attempting to add an insecure Docker registry to a DinD image that I run in a Concourse task:
I tried beginning my task by running:
export DOCKER_OPTS="$DOCKER_OPTS --insecure-registry=${INSECURE_REG}"
and tried spinning up the daemon and compose:
docker daemon --insecure-registry=${INSECURE_REG} &
docker-compose up
However, the task fails with server gave http response to https client, and no such image.
The whole task looks like this (basically it is a shell script executed in the dind container that ends with a docker-compose):
# Connect to insecure docker registry:
export DOCKER_OPTS="$DOCKER_OPTS --insecure-registry=${INSECURE_REG}"
# Install docker-compose:
apk add --no-cache py-pip curl
pip install docker-compose
# Verify docker registry:
curl http://${INSECURE_REG}/v2/_catalog #curl does return the expected json
sanitize_cgroups() {
  mkdir -p /sys/fs/cgroup
  mountpoint -q /sys/fs/cgroup || \
    mount -t tmpfs -o uid=0,gid=0,mode=0755 cgroup /sys/fs/cgroup

  mount -o remount,rw /sys/fs/cgroup

  sed -e 1d /proc/cgroups | while read sys hierarchy num enabled; do
    if [ "$enabled" != "1" ]; then
      # subsystem disabled; skip
      continue
    fi

    grouping="$(cat /proc/self/cgroup | cut -d: -f2 | grep "\\<$sys\\>")"
    if [ -z "$grouping" ]; then
      # subsystem not mounted anywhere; mount it on its own
      grouping="$sys"
    fi

    mountpoint="/sys/fs/cgroup/$grouping"
    mkdir -p "$mountpoint"

    # clear out existing mount to make sure new one is read-write
    if mountpoint -q "$mountpoint"; then
      umount "$mountpoint"
    fi

    mount -n -t cgroup -o "$grouping" cgroup "$mountpoint"

    if [ "$grouping" != "$sys" ]; then
      if [ -L "/sys/fs/cgroup/$sys" ]; then
        rm "/sys/fs/cgroup/$sys"
      fi
      ln -s "$mountpoint" "/sys/fs/cgroup/$sys"
    fi
  done
}
# https://github.com/concourse/concourse/issues/324
sanitize_cgroups
# Spin up the stack as described in docker-compose:
docker daemon --insecure-registry=${INSECURE_REG} &
docker-compose up
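One issue with the script above is that docker-compose can race the backgrounded daemon, which has not finished starting when compose first tries to connect. A small retry helper avoids that; this is only a sketch, and the function name, probe command, and timeout are assumptions:

```shell
# Retry a probe command until it succeeds, up to a maximum number of attempts.
# Returns 0 once the probe succeeds, 1 if all attempts are exhausted.
wait_for() {
  probe="$1"
  tries="${2:-30}"
  i=0
  until sh -c "$probe" >/dev/null 2>&1; do
    i=$((i + 1))
    [ "$i" -ge "$tries" ] && return 1
    sleep 1
  done
  return 0
}

# Usage in the task script (docker commands shown for context only):
#   docker daemon --insecure-registry=${INSECURE_REG} &
#   wait_for "docker info" 30 || exit 1
#   docker-compose up
```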
dockerd --insecure-registry=${INSECURE_REG}
is the correct way of starting the Docker daemon with an insecure registry. Even though it reported errors, it pulled the images and started them successfully.
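If you would rather not pass the flag on every dockerd invocation, the registry can also be declared in the daemon's JSON config. A sketch (the registry address is an assumption, and the real file belongs at /etc/docker/daemon.json; it is written to the current directory here for illustration):

```shell
# Write an insecure-registries entry as daemon.json would carry it.
# Replace my-registry.local:5000 with your own registry address.
cat > daemon.json <<'EOF'
{
  "insecure-registries": ["my-registry.local:5000"]
}
EOF
```

dockerd reads this file at startup, so no flag is needed once it is in place.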
Processes in docker containers are still running under the "host's" UID although I have enabled user namespace remapping.
OS is: Ubuntu 16.04 on 4.4.0-21 with
> sudo docker --version
Docker version 1.12.0, build 8eab29e
dockerd configuration is
> grep "DOCKER_OPTS" /etc/default/docker
DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 --ipv6 --userns-remap=default"
Subordinate UID and GID mappings were created when I ran dockerd manually with the above options string:
> grep "dock" /etc/sub*
/etc/subgid:dockremap:362144:65536
/etc/subuid:dockremap:362144:65536
However, the subordinate UIDs/GIDs were not created when I (re)started dockerd as a service; I had to run it manually.
Also, after restarting dockerd, processes in containers are not in the remapped range but map 1:1 to the host's, i.e., a container root process still has UID 0.
E.g., a test container running just top
> sudo docker run -t -i ubuntu /usr/bin/top
...
has top run by UID=0 when checked outside the container on the host
> ps -xaf --forest -o pid,ruid,ruser,cmd | grep top
PID RUID RUSER CMD
23015 0 root | \_ sudo docker run -t -i ubuntu /usr/bin/top
23016 0 root | \_ docker run -t -i ubuntu /usr/bin/top
Apparently, the remapping to subordinate UIDs is not working for me when Docker runs as a service?
/etc/default/docker is not used when dockerd runs via systemd.
Thus, any changes I made to the Docker config (after the dist-upgrade I had applied earlier) were not applied.
For configuring the Docker daemon with systemd, see the documentation at
https://docs.docker.com/engine/admin/systemd/
with the configuration drop-in file(s) going to
/etc/systemd/system/docker.service.d
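A minimal sketch of such a drop-in, carrying the options from the question's DOCKER_OPTS (the real file belongs at /etc/systemd/system/docker.service.d/override.conf; it is written to the current directory here for illustration, and the dockerd path is an assumption):

```shell
# The empty ExecStart= line clears the unit's original ExecStart before
# the override sets a new one -- systemd requires this for Type=notify units.
cat > override.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --dns 8.8.8.8 --dns 8.8.4.4 --ipv6 --userns-remap=default
EOF
# Then, on the real host:
#   sudo systemctl daemon-reload && sudo systemctl restart docker
```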
I'm currently learning Docker, and have made a nice and simple Docker Compose setup. 3 containers, all with their own Dockerfile setup. How could I go about converting this to work on CoreOS so I can setup up a cluster later on?
web:
  build: ./app
  ports:
    - "3030:3000"
  links:
    - "redis"
newrelic:
  build: ./newrelic
  links:
    - "redis"
redis:
  build: ./redis
  ports:
    - "6379:6379"
  volumes:
    - /data/redis:/data
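On CoreOS, each compose service typically becomes a systemd unit. As a sketch, the redis service above might look like this as a unit (the unit name, container name, and image name my-redis are assumptions, since the compose file builds ./redis rather than naming an image):

```ini
# Sketch: redis service from the compose file as a CoreOS systemd unit.
[Unit]
Description=Redis container
After=docker.service
Requires=docker.service

[Service]
Restart=always
ExecStartPre=-/usr/bin/docker rm -f redis
ExecStart=/usr/bin/docker run --name redis -p 6379:6379 -v /data/redis:/data my-redis
ExecStop=/usr/bin/docker stop redis
```

The web and newrelic services would get similar units, with the links replaced by container names or networking.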
Taken from https://docs.docker.com/compose/install/.
The only thing is that /usr is read-only on CoreOS, but /opt/bin is writable and on the path, so:
sd-xx~ # mkdir /opt/
sd-xx~ # mkdir /opt/bin
sd-xx~ # curl -L https://github.com/docker/compose/releases/download/1.3.3/docker-compose-`uname -s`-`uname -m` > /opt/bin/docker-compose
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 403 0 403 0 0 1076 0 --:--:-- --:--:-- --:--:-- 1080
100 7990k 100 7990k 0 0 2137k 0 0:00:03 0:00:03 --:--:-- 3176k
sd-xx~ # chmod +x /opt/bin/docker-compose
sd-xx~ # docker-compose
Define and run multi-container applications with Docker.
Usage:
docker-compose [options] [COMMAND] [ARGS...]
docker-compose -h|--help
Options:
-f, --file FILE Specify an alternate compose file (default: docker-compose.yml)
-p, --project-name NAME Specify an alternate project name (default: directory name)
--verbose Show more output
-v, --version Print version and exit
Commands:
build Build or rebuild services
help Get help on a command
kill Kill containers
logs View output from containers
port Print the public port for a port binding
ps List containers
pull Pulls service images
restart Restart services
rm Remove stopped containers
run Run a one-off command
scale Set number of containers for a service
start Start services
stop Stop services
up Create and start containers
migrate-to-labels Recreate containers to add labels
I've created simple script for installing latest Docker Compose on CoreOS:
https://gist.github.com/marszall87/ee7c5ea6f6da9f8968dd
#!/bin/bash
mkdir -p /opt/bin
curl -L `curl -s https://api.github.com/repos/docker/compose/releases/latest | jq -r '.assets[].browser_download_url | select(contains("Linux") and contains("x86_64"))'` > /opt/bin/docker-compose
chmod +x /opt/bin/docker-compose
Just run it with sudo
The proper way to install or run really anything on CoreOS is to either:
Install it as a unit
Run it in a separate Docker container
For docker-compose you probably want to install it as a unit, just like you have docker as a unit. See Digital Ocean's excellent guides on CoreOS and the systemd units chapter to learn more.
Locate your cloud config based on your cloud provider or custom installation, see https://coreos.com/os/docs/latest/cloud-config-locations.html for locations.
Install docker-compose by adding it as a unit
#cloud-config
coreos:
  units:
    - name: install-docker-compose.service
      command: start
      content: |
        [Unit]
        Description=Install docker-compose
        ConditionPathExists=!/opt/bin/docker-compose

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/usr/bin/mkdir -p /opt/bin/
        ExecStart=/usr/bin/curl -o /opt/bin/docker-compose -sL "https://github.com/docker/compose/releases/download/1.9.0/docker-compose-linux-x86_64"
        ExecStart=/usr/bin/chmod +x /opt/bin/docker-compose
Note that I couldn't get the uname -s and uname -m expansions to work in the curl statement so I just replaced them with their expanded values.
Validate your config file with
coreos-cloudinit -validate --from-file path-to-cloud-config
It should output something like
myhost core # coreos-cloudinit -validate --from-file path-to-cloudconfig
2016/12/12 12:45:03 Checking availability of "local-file"
2016/12/12 12:45:03 Fetching user-data from datasource of type "local-file"
myhost core #
Note that coreos-cloudinit doesn't validate the contents-blocks in your cloud-config. Restart CoreOS when you're finished, and you're ready to go.
Update: As @Wolfgang comments, you can run coreos-cloudinit --from-file path-to-cloud-config instead of restarting CoreOS.
I would also suggest docker-compose in a docker container like the one from dduportal.
For the sake of usability I extended my cloud-config.yml as follows:
write_files:
  - path: "/etc/profile.d/aliases.sh"
    content: |
      alias docker-compose="docker run -v \"\$(pwd)\":\"\$(pwd)\" -v /var/run/docker.sock:/var/run/docker.sock -e COMPOSE_PROJECT_NAME=\$(basename \"\$(pwd)\") -ti --rm --workdir=\"\$(pwd)\" dduportal/docker-compose:latest"
After updating the cloud-config via sudo coreos-cloudinit -from-url http-path-to/cloud-config.yml and a system reboot, you are able to use the docker-compose command like you are used to on every other machine.
CenturyLink Labs created a Ruby gem called fig2coreos.
It translates fig.yml to .service files
Fig was deprecated when docker-compose was created, but the syntax seems to be the same, so it could probably work.
Simple 3 Steps:
sudo mkdir -p /opt/bin
Grab the command from the official website https://docs.docker.com/compose/install/ and change the output path from /usr/local/bin/docker-compose to /opt/bin:
sudo curl -L "https://github.com/docker/compose/releases/download/1.9.0/docker-compose-$(uname -s)-$(uname -m)" -o /opt/bin/docker-compose
Make executable:
sudo chmod +x /opt/bin/docker-compose
Now you have docker-compose :)
Here it is, the best way I found:
core@london-1 ~ $ docker pull dduportal/docker-compose
core@london-1 ~ $ cd /dir/where-it-is-your/docker-compose.yml
core@london-1 ~ $ docker run -v "$(pwd)":/app \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -e COMPOSE_PROJECT_NAME=$(basename "$(pwd)") \
    -ti --rm \
    dduportal/docker-compose:latest up
done!
Well, CoreOS supports Docker, but it is bare-bones Linux with clustering support, so you need a base image for all your containers (use FROM, and in the Dockerfile you might also need to RUN yum -y install bzip2 gnupg, etc.) that has the binaries and libraries needed by your app and Redis (better to take some Ubuntu base image).
You can put everything in one container or in separate ones; if you keep them separate, you need to link the containers and optionally mount volumes. Docker has some good notes about this (https://docs.docker.com/userguide/dockervolumes/).
Lastly, you need to write a cloud-config that specifies the systemd units. In your case you will have 3 units started by systemd (systemd replaces the good old init system in CoreOS); feed it to coreos-cloudinit (tip: coreos-cloudinit -from-file=./cloud-config -validate=false). You also need to provide this cloud-config on the Linux bootcmd for persistence.
Currently, the easiest way is to use docker-compose against a CoreOS Vagrant VM. You just need to make sure to forward the Docker port.
If you are not particularly attached to using docker-compose, you can try CoreOS running Kubernetes. There are multiple options and I have implemented one of those for Azure.
When using docker-compose with Fedora CoreOS you may run into issues with Python; however, running docker-compose from a container works perfectly.
There is a handy bash wrapper script and it is documented in the official documentation here: https://docs.docker.com/compose/install/#alternative-install-options under the "Install as a container" section.
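A minimal sketch of such a wrapper is below. The image tag docker/compose:1.29.2 is an assumption (the official docs pin a specific release), and the script is written to the current directory here; on a real host you would place it in /opt/bin or similar:

```shell
# Generate a docker-compose wrapper script that runs compose inside a
# container, mounting the Docker socket and the current directory.
cat > docker-compose <<'EOF'
#!/bin/sh
exec docker run --rm -ti \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD:$PWD" -w "$PWD" \
  docker/compose:1.29.2 "$@"
EOF
chmod +x docker-compose
```

Because the container mounts the host's Docker socket, compose commands behave as if compose were installed natively.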