I'm starting to dig into AppArmor, and since nearly all my services run in Docker containers, I would like to create profiles for these containers, as mentioned in the Docker docs.
Does anybody have experience with this? Can I somehow use aa-genprof with a Docker container to semi-automate the process?
Greetings
mathas
I think this will do what you're asking. I don't think it's as seamless as aa-genprof, but it should hopefully speed up the process of creating profiles.
https://github.com/genuinetools/bane
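For reference, once you have a profile (generated by bane or written by hand), attaching it to a container is just a run flag. A minimal sketch, assuming a hypothetical profile named docker-nginx stored under /etc/apparmor.d/containers/:

# load the profile into the kernel (path and profile name are only examples)
sudo apparmor_parser -r -W /etc/apparmor.d/containers/docker-nginx

# run the container confined by that profile
docker run --rm -it --security-opt apparmor=docker-nginx nginx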
Related
When I try to docker compose down a specific profile, it stops and removes all containers.
I want to remove only the containers that are in the referenced profile.
docker compose --profile elk down # Let's say I have some services in elk profile
In the above example, I want to bring down only the services that are tagged with the elk profile.
Same issue here (not really an answer). Alternatively, it would be great if docker compose --profile foo up --remove-orphans or something similar worked as well.
There was a similar issue about this, but it literally just got closed due to inactivity:
https://github.com/docker/compose/issues/8432
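In the meantime, one workaround is to stop and remove only the services you know belong to the profile, by naming them explicitly. A rough sketch, assuming the elk profile contains hypothetical services named elasticsearch, logstash and kibana:

# stop and remove only the named services instead of taking down the whole project
docker compose stop elasticsearch logstash kibana
docker compose rm -f elasticsearch logstash kibana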
We have been experiencing the same issues. I followed the thread on the bug report, and it looks like it has not been solved yet.
We moved from Compose to Swarm (multiple single-node-manager clusters instead of only using Compose), and since we now use stacks we don't need profiles anymore.
After we create a Docker image, we can run it anywhere via a global registry.
I am wondering whether we can run it directly on a server that doesn't have Docker installed?
New to docker, sorry if I have made any stupid mistake.
Thank you guys.
You don't necessarily need to use Docker to run Docker containers - the Docker image format is an open specification.
You will need a platform which can understand this specification - and the one provided by Docker is the reference implementation - but there are alternatives such as Rocket.
Ultimately you will need something that can understand and run Docker containers, so unless your servers already have this capability you will need to install new software on them for this purpose.
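As one illustration of such an alternative, a daemonless engine like podman can pull and run the same images with a Docker-compatible CLI. A minimal sketch (the image is just an example):

# pull and run a standard Docker/OCI image without the Docker daemon installed
podman pull docker.io/library/alpine:latest
podman run --rm docker.io/library/alpine:latest echo "hello from an alternative runtime"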
I have found a lot of articles that talk about communication between Docker containers (Docker networks, Docker links), but I don't know if there is a good practice for controlling one container from another, like starting and stopping a container.
If the only way is to use the REST API on the host, do you have a good article that explains it? Regarding the REST API, I have found plenty of articles explaining it, but most of them are outdated.
To clarify my intention: I have a Jenkins container that builds the code and moves the build output into another folder for a second container, which executes the built code. Basically, I want to stop that container before the move and restart it afterwards.
Thanks for help.
I don't know if there is a good practice for controlling one container from another, like starting and stopping a container.
It's a "good enough" practice, and plenty of people do this. CoreOS's /usr/bin/toolbox is basically this, and a few others like RancherOS do it as well.
If the only way is to use the REST API on the host, do you have a good article that explains it?
No, it is not. You can mount Docker's socket into another container and then run docker commands against the host directly from inside that container. This practice is called "docker in docker", "dind", "nested containers", etc. There is a variation of this where people run full-fledged installations of Docker (engine/daemon plus client) within an existing container, but that is not what you want to do here.
The gist of it is usually the same: the Docker Unix socket, /var/run/docker.sock, is exposed/mounted within the "controlling container", i.e. the container you want to use to control the Docker daemon. You then install the docker command-line client and use docker commands as normal; docker ps and docker start/stop/run should all work as expected.
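A minimal sketch of that setup, assuming an image that ships the docker CLI (here the official docker image's cli variant) and a hypothetical container named app-runner that you want to control:

# list the host's containers from inside a container, via the mounted socket
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli docker ps

# stop and restart a container running on the host, e.g. around the Jenkins move step
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli docker stop app-runner
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli docker start app-runner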
It's not trivial to set it up [1], and there are associated security concerns [2][3], but there are plenty of people doing it.
Here are your references:
[1] https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/ (see the section under "Solution"; everything before that is what you should not be doing)
[2] https://www.lvh.io/posts/dont-expose-the-docker-socket-not-even-to-a-container.html
[3] https://raesene.github.io/blog/2016/03/06/The-Dangers-Of-Docker.sock/
I want to use Docker to isolate scientific applications for use in an HPC Unix cluster. Scientific software often has exotic dependencies, so isolating them with Docker appears to be a good idea. The programs are to be run as jobs, not as services.
I want to have multiple users use Docker and the users should be isolated from each other. Is this possible?
I performed a local Docker installation and had two users in the docker group. The call to docker images showed the same results for both users.
Further, the jobs should be run under the calling user's UID and not as root.
Is such a setup feasible? Has it been done before? Is this documented anywhere?
Yes, there is! It's called Singularity, and it was designed for scientific applications and multi-user HPC systems. More at http://singularity.lbl.gov/
OK, I think more and more solutions will pop up for this. I'll try to keep the following list updated:
udocker for executing Docker containers as users
Singularity (kudos to Filo) is another Linux-container-based solution
Don't forget about DinD (Docker in Docker): jpetazzo/dind
You could dedicate one Docker daemon per user, and within one of those Docker containers, the user could launch a job in a nested Docker container.
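As a rough illustration of the Singularity route (images and commands are only examples), containers run under the calling user's UID without a root daemon:

# pull a Docker image from a registry and run it as the invoking user
singularity exec docker://python:3.11-slim python3 --version

# whoami inside the container reports the calling cluster user, not root
singularity exec docker://ubuntu:22.04 whoami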
I'm also interested in this possibility with Docker, for similar reasons.
There are a few problems I can think of:
The Docker daemon runs as root, giving anyone in the docker group effective host root permissions (e.g. permissions can be leaked by mounting the host / directory as root).
Multi-user isolation, as mentioned.
Not sure how well this will play with any existing load balancers.
I came across Shifter, which may be worth a look and partly solves #1:
http://www.nersc.gov/research-and-development/user-defined-images/
Also, I know there is discussion about using kernel user namespaces to map container root to a non-privileged host user, but I'm not sure whether this is happening or not.
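For what it's worth, Docker's userns-remap daemon option does provide this kind of mapping. A sketch of enabling it, assuming a reasonably recent engine (merge into any existing /etc/docker/daemon.json rather than overwriting it):

# map container root to an unprivileged subordinate UID/GID range on the host
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "userns-remap": "default"
}
EOF
sudo systemctl restart docker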
There is an officially supported Docker image that allows one to run Docker in Docker (dind), available here: https://hub.docker.com/_/docker/. This way, each user can have their own Docker daemon. First, start the daemon instance:
docker run --privileged --name some-docker -d docker:stable-dind
Note that the --privileged flag is required. Next, connect to that instance from a second container:
docker run --rm --link some-docker:docker docker:edge version
So it's easy to tell Docker which CPU a container can use:
docker run --cpuset-cpus=7 some_container_name
But this command can be run multiple times, and all of those containers will share the same core. Is there a way to give a container exclusive access to a CPU and error out if anyone else tries to use it?
No, this isn't a feature of Docker. It would need to be done at a layer above Docker (like Kubernetes or ECS), but it would also be fairly easy to implement yourself; a rough sketch follows.
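A sketch of such a wrapper, assuming a hypothetical target of CPU 7 and a naive check that does not parse cpuset ranges like 0-3:

#!/bin/sh
# refuse to start if any running container already has CPU 7 in its cpuset
CPU=7
used=$(docker ps -q | xargs -r docker inspect --format '{{.HostConfig.CpusetCpus}}')
if echo "$used" | grep -qw "$CPU"; then
  echo "CPU $CPU is already reserved by another container" >&2
  exit 1
fi
exec docker run --cpuset-cpus="$CPU" some_container_name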