I have a seccomp profile for my Golang app (generated with go2seccomp) to use with Docker, but I would like to avoid having to pass it on the command line with --security-opt. Is there a way to "integrate" the profile while building the image? One reason is to avoid having an extra file and to be able to docker run the image directly.
Maybe I'm not understanding how Docker and seccomp work together, or maybe this is just not an available feature.
I'm starting to dig into AppArmor and since nearly all my services run in a docker container I would like to create profiles for these containers, as mentioned in the docker docs.
Does anybody have experience with this? Can I somehow use aa-genprof with a Docker container to semi-automate the process?
I think this will do what you're asking. I don't think it's as seamless as aa-genprof, but hopefully it will speed up the process of creating profiles.
https://github.com/genuinetools/bane
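If I remember the bane README correctly, you describe the profile in a TOML file, bane installs it under a docker- prefix, and you then attach it with the standard --security-opt flag (the profile name below is the README's sample, not something generated for your services):

$ sudo bane sample.toml
$ docker run -d --security-opt="apparmor:docker-nginx-sample" nginx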
I have set up my Keycloak identity server by running a .yml file that uses the Docker image jboss/keycloak:9.0.0.
Now I want to get inside the container and modify some files in order to do some testing.
Unfortunately, after I got inside the running container, I realized that some very basic UNIX commands like sudo or vi (and many more) aren't available, nor are package managers like apt-get or yum, which I tried to use to install them.
According to this question, it seems that the underlying OS of the container (Red Hat Universal Base Image) uses the command microdnf to manage software, but unfortunately when I tried to use this command for any action I got the following message:
error: Failed to create: /var/cache/yum/metadata
Could you please propose any workaround for my case? I just need to use a text editor command like vi, and root privileges for my user (so commands like sudo, su, or chmod). Thanks in advance.
If you still, for some reason, want to exec into the container, try adding --user root to your docker exec command.
Just exec'ing into the container without --user will do so as the user jboss, which seems to have fewer privileges.
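For example (the container name is a placeholder):

docker exec -it --user root <keycloak-container> /bin/bash

Once inside as root, microdnf should also be able to write its cache, so something like microdnf install vim ought to get you a text editor.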
It looks like you are trying to apply an approach from the non-Docker (old school) world to the Docker world, and that's not the right fit. Usually you don't need to go into the container and edit any config file there; such a change will very likely be lost (it depends on the container configuration). Containers are usually configured via environment variables or volumes.
Example how to use TLS certificates: Keycloak Docker HTTPS required
https://hub.docker.com/r/jboss/keycloak/ is also a good starting point to check the available environment variables, which may help you achieve what you need. For example, PROXY_ADDRESS_FORWARDING=true lets you run the Keycloak container behind a load balancer without touching any config file.
I would also say that adding your own config files at build time is not the best option: you would have to maintain your own image. Just use volumes and "override" the default config file(s) in the container with your own config file(s) from the host OS file system, e.g.:
-v /host-os-path/my-custom-standalone-ha.xml:/opt/jboss/keycloak/standalone/configuration/standalone-ha.xml
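A full run command would then look something like this (detached mode and the image tag are just examples):

docker run -d \
  -v /host-os-path/my-custom-standalone-ha.xml:/opt/jboss/keycloak/standalone/configuration/standalone-ha.xml \
  jboss/keycloak:9.0.0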
I'm using a platform (Cytomine) on Ubuntu 18.04 to run some deep learning containerized applications (this platform handles the Docker images and containers automatically, so I only need to create the image and provide its download URL to the platform). So far it's working well, but now I need to enable GPU support to run the model efficiently. Thus, I did some local tests with nvidia-docker to manually run the model container with GPU support; it was really easy to get it working because I just had to add one option to the run command:
docker run --gpus all
However, because I cannot add this option to the code on the Cytomine platform I need to find a way of adding/enabling that option by default to all the containers run by docker.
I tried adding this option to the files /etc/docker/daemon.json and /etc/docker/key.json and then restarted Docker with sudo systemctl restart docker. However, it didn't work.
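For reference, the kind of change I attempted in /etc/docker/daemon.json, following examples I found for nvidia-container-runtime (the runtime path below comes from those examples and may differ per install), was roughly:

{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}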
Also, I found how to create Docker config files (docker config); however, this seems to work only with Docker Swarm, and I'm not going to use a Swarm for this project.
Thus, I'm looking for a straightforward solution that can be deployed properly. Is there any way to enable this option (--gpus all) by default when running any Docker container? (like somehow including it on the Dockerfile?)
Thanks!
How can I restrict the system calls made inside a Docker container, so that if a process makes a given system call it will be blocked? In other words, how can I use seccomp with Docker?
You can see more at "Seccomp security profiles for Docker" (the feature is available only if the kernel is configured with CONFIG_SECCOMP enabled).
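To check whether your kernel has it enabled, the Docker docs suggest something along these lines (output shown for a kernel with seccomp compiled in):

$ grep CONFIG_SECCOMP= /boot/config-$(uname -r)
CONFIG_SECCOMP=y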
The support for Docker containers will be in Docker 1.10: see issue 17142,
allowing the Engine to accept a seccomp profile at container run time.
In the future, we might want to ship builtin profiles, or bake profiles in the images.
PR 17989 has been merged.
It allows for passing a seccomp profile in the form of:
{
    "defaultAction": "SCMP_ACT_ALLOW",
    "syscalls": [
        {
            "name": "getcwd",
            "action": "SCMP_ACT_ERRNO"
        }
    ]
}
Example (based on Linux-specific Runtime Configuration - seccomp):
$ docker run --rm -it --security-opt seccomp:/path/to/container-profile.json jess/i-am-malicious
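Note that on current Docker versions the profile is passed with an equals sign instead of a colon:

$ docker run --rm -it --security-opt seccomp=/path/to/container-profile.json jess/i-am-malicious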
I want to use OpenStack Heat to create an application which consists of several Docker containers, and monitor some metrics of these containers, like: CPU/Mem utilization, and other application-specific metrics.
So is it possible to install cloud-init and heat-cfntools when preparing the Docker image via a Dockerfile, and then run a Docker container, based on that image, which has cloud-init and heat-cfntools running in it?
Thanks!
So is it possible to install cloud-init and heat-cfntools when preparing the Docker image via a Dockerfile
It is possible to use cloud-init inside a Docker container, if you (a) have an image with cloud-init installed, (b) have the correct commands configured in your ENTRYPOINT or CMD script, and (c) your container is running in an environment with an available metadata service.
Of those requirements, (c) is probably the most problematic; unless you are booting containers using the nova-docker driver, it is unlikely that your containers will have access to the Nova metadata service.
I am not particularly familiar with heat-cfntools, although a quick glance at the code suggests that it may work without cloud-init by authenticating against the Heat CFN API using ec2-style credentials, which you would presumably need to provide via environment variables or something.
That said, it is typically much less necessary to run cloud-init inside Docker containers, the theory being that if you need to customize an image, you'll use a Dockerfile to build a new one based on that image and re-deploy, or specify any necessary additional configuration via environment variables.
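For illustration, a minimal Dockerfile along these lines ought to get both tools into an image (the base image and package names are assumptions; heat-cfntools is installable from PyPI):

FROM ubuntu:14.04
# cloud-init from the distro repos, heat-cfntools from PyPI
RUN apt-get update && \
    apt-get install -y cloud-init python-pip && \
    pip install heat-cfntools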
If your tools require monitoring processes on the host, you'll probably want to run with
docker run --pid=host
This is a feature introduced in Docker Engine version 1.5.
See http://docs.docker.com/reference/run/#pid-settings
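For example, a container started this way can see every process on the host (the image choice is arbitrary):

docker run --rm --pid=host alpine ps aux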