I conduct security testing on programs and applications. Often, when I want to run a tool, I need to add libraries and other dependencies onto the system that I'm testing in order for the tool to work properly. Also, the test environment doesn't have internet access, which makes installing these dependencies more difficult.
I was thinking about containerizing multiple tools so that I could put them onto the systems that I'm testing and they will have all of their dependencies. I am considering doing this in Docker, so anytime you see a reference to a container it implies that it's a Docker container.
Some of the tools I would like to use are nmap, strace, wireshark, and others for monitoring network traffic, processes and memory.
My questions are:
Can I run these tools "locally", or will networking be required, as though they were coming from a different machine?
Is there anything I need to put onto the test system for the container to run properly?
Docker allows you to access devices on the host itself using the --privileged flag.
Additionally, you can add the --network=host flag, which means your container's network stack is no longer isolated from the host's.
Using the flags above should allow you to run your tools in Docker without additional requirements on the host (other than a Docker runtime plus your container image).
However, it does mean you need to load your Docker image onto the host. Normally, downloading an image requires networking, but you can export the image to a tar archive with docker save and load that archive on the test system with docker load.
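Putting those pieces together, a minimal sketch might look like this (my-tools-image is just a placeholder for whatever image you build with your tools baked in):

# on a machine that has internet access, build/pull the image and export it
docker save -o tools.tar my-tools-image

# on the offline test system, after copying tools.tar across
docker load -i tools.tar
docker run --rm -it --privileged --network=host my-tools-image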
Docker is, first and foremost, an isolation system that hides details of the host system from the container environment. All of the tools you describe need low-level access to the host network and process space: you actively do not want the isolation that Docker is providing here.
To give two specific examples: without special setup, you can’t tcpdump the host’s network interface, because a container has its own isolated network stack; and you can’t strace host processes because a container has its own process ID space and restricted Linux capabilities that disallow it.
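For completeness, the "special setup" alluded to here amounts to deliberately tearing those isolation layers back down, roughly like this (tools-image, eth0, and the target PID are placeholders, and depending on the Docker version you may also need to relax the seccomp profile):

# share the host's network namespace so tcpdump sees the host's interfaces
docker run --rm -it --network=host tools-image tcpdump -i eth0

# share the host's PID namespace and grant ptrace so strace can attach to host processes
docker run --rm -it --pid=host --cap-add SYS_PTRACE tools-image strace -p "$TARGET_PID"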
(From a security point of view, also consider that anyone who can run any docker command can trivially root the host, and there is a reasonably common misconfiguration that publishes this ability to the network. You’d prefer not to need to install Docker at all if you’re interested in securing an existing system.)
I’d probably put this set of tools on a USB drive (if that’s allowed in your environment) and run them directly off of there. A tar file of tools would work as well, if that’s easier to transfer. Either building them into static binaries or providing a chroot environment would make them independent of what’s installed in the host environment, but also wouldn’t block you from observing all of the things you’re trying to observe.
Related
What would be some use case for keeping Docker clients or CLI and Docker daemon on separate machines?
Why would you keep the two separate?
You should almost never run the two separately. The only exception is very heavily managed docker-machine setups where you're confident that Docker has set up all of the required security controls. Even then, I'd only use that for a local VM when necessary (as part of Docker Toolbox; to demonstrate a Swarm setup), and use more purpose-built tools to provision cloud resources.
Consider this Docker command:
docker run --rm -v /:/host busybox vi /host/etc/shadow
Anyone who can run this command can change any host user's password to anything of their choosing and easily take over the whole system. There are probably even more direct ways to root the host. The only requirement to run this command is access to the Docker socket.
This means: anyone who can access the Docker socket can trivially root the host. If it's network accessible, anyone who can reach port 2375 on your system can take it over.
This isn't an acceptable security position for the mild convenience of not needing to ssh to a remote server to run docker commands. The common system-automation tools (Ansible, Chef, SaltStack) can all invoke Docker as required, and using one of these tools is almost certainly preferable to trying to configure TLS for Docker.
If you run into a tutorial or other setup advising you to start the Docker daemon with a -H option to publish the Docker socket over the network (even just to the local system) be aware that it's a massive security vulnerability, equivalent to disabling your root password.
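To make that concrete, the dangerous configuration looks roughly like this (the hostname is just an example):

# insecure: the daemon listens on every interface with no authentication
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375

# any remote user can now run the command shown earlier against your host
docker -H tcp://your-host:2375 run --rm -v /:/host busybox vi /host/etc/shadow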
(I hinted above that it's possible to use TLS encryption on the network socket. This is a tricky setup, and it involves sharing around a TLS client certificate that has root-equivalent power over the host. I wouldn't recommend trying it; ssh to the target system or use an automation tool to manage it instead.)
I am trying to create the Dockerfile (image) for a web application I am building. The web application is written in Node.js and Vue.js. In order to create a Docker container for the application, I followed the Vue.js documentation for creating a Dockerfile. The steps given are working fine; I just want to clear up my understanding of one part.
Link: https://cli.vuejs.org/guide/deployment.html#docker-nginx
If the necessary packages (Node/Python) are installed in the host OS (not in the container), would the container be able to pick up the npm scripts and execute Python scripts as well? If so, does the container really depend on the host OS's software packages?
Please help me with the understanding.
Yes, you need to install Node, Python, or whatever software your application needs inside the container itself. The reason is that the container should be able to run on any host machine that has Docker installed, regardless of how the host machine is set up or what software it has installed.
It might be a bit tedious at first to make sure that your Dockerfile installs all the software that is needed, but it becomes very useful when you want to run your container on another machine. Then all you have to do is type docker run and it should work!
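For example, the kind of multi-stage Dockerfile the linked Vue guide describes looks roughly like this (image tags and paths are illustrative); note that Node is installed inside the build image and never taken from the host:

# build stage: Node lives inside the image, not on the host
FROM node:lts-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# production stage: serve the compiled static files with nginx
FROM nginx:stable-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]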
Like David said above, Docker containers are isolated from your host machine and should be treated as a completely different machine/host. The way containers communicate with other containers, or sometimes with the host, is through network ports.
One "exception" to the isolation between the container and the host is that the container can sometimes write to files on the host in order to persist data even after the container has been stopped. You can use volumes or bind mounts to allow containers to write to files on the host.
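A quick sketch of that, using the official MySQL image as an example (the host path is arbitrary): this bind-mounts a host directory over the path where MySQL keeps its data, so the data survives the container being removed:

docker run -d --name some-mysql \
  -e MYSQL_ROOT_PASSWORD=secret \
  -v "$PWD/mysql-data:/var/lib/mysql" \
  mysql:8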
I would suggest the Docker Overview for more information about Docker.
I've read that you shouldn't ssh into a Docker container. But why? I'd like to use a Docker container as a replacement for a normal VM. What are the disadvantages? I know that this will create a lot of layers, but I could flatten my container on a regular basis.
Can I use the container as a regular VM, and what is the "worst case" that could happen?
Docker containers are optimized around running single processes. Virtual machines are optimized around running entire operating systems.
At a technical level you generally can run something that looks like a full VM inside a Docker container, but it's a lot of hand setup. For instance, a typical systemd setup wants to manage several host devices and kernel-level configuration options, and your choices to run systemd are either (a) let it manage the host and possibly conflict with the host's systemd, or (b) manually figure out which unit files you can't run and disable them. All of the prebuilt Docker images run only single services (just MySQL, just Nginx, just a Python runtime, ...) and so you're also giving up this ecosystem.
A VM certainly gives up some amount of efficiency by virtualizing hardware devices and running multiple OS kernels, but if you really want to run a VM, it's not a huge performance loss; just run a VM if that's the model you want to use.
No, you shouldn't use it as a replacement for a VM, since a Docker container is built around a single entrypoint process. It isn't designed to run multiple services on multiple ports the way a regular virtual machine would.
I built a Docker image on a server that runs CI/CD for Jenkins. Because some builds use Docker, I installed Docker inside my image, and in order to allow the inner Docker to run, I had to give it --privileged.
All works well, but I would like to run Docker-in-Docker on OpenShift (or Kubernetes). The problem is getting the --privileged permission.
Is running a privileged container on OpenShift dangerous, and if so, why and how much?
A privileged container can reboot the host, replace the host's kernel, access arbitrary host devices (like the raw disk device), and reconfigure the host's network stack, among other things. I'd consider it extremely dangerous, and not really any safer than running a process as root on the host.
I'd suggest that using --privileged at all is probably a mistake. If you really need a process to administer the host, you should run it directly (as root) on the host and not inside an isolation layer that blocks the things it's trying to do. There are some limited escalated-privilege things that are useful, but if e.g. your container needs to mlock(2) you should --cap-add IPC_LOCK for the specific privilege you need, instead of opening up the whole world.
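A rough illustration of that difference (my-image is a placeholder):

# too broad: effectively root-equivalent access to the host
docker run --privileged my-image

# narrower: grant only the single capability the process actually needs (here, to call mlock(2))
docker run --cap-add IPC_LOCK my-image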
(My understanding is still that trying to run Docker inside Docker is generally considered a mistake and using the host's Docker daemon is preferable. Of course, this also gives unlimited control over the host...)
In short, the answer is no, it's not safe. Docker-in-Docker in particular is far from safe due to potential memory and file-system corruption, and even mounting the host's Docker socket is unsafe in virtually any environment, as it effectively gives the build pipeline root privileges. This is why tools like Buildah and Kaniko were made, as well as build images like S2I.
Buildah in particular is Red Hat's own tool for building inside containers, but as of now I believe it still can't run completely privilege-less.
Additionally, on OpenShift 4 you cannot run Docker-in-Docker at all, since the container runtime was changed to CRI-O.
After reading the introduction of phusion/baseimage, I feel like creating containers from the Ubuntu image (or any other official distro image) and running a single application process inside the container is wrong.
The main reasons in short:
No proper init process (that handles zombie and orphaned processes)
No syslog service
Based on these facts, most of the official Docker images available on Docker Hub seem to do things wrong. As an example, the MySQL image runs mysqld as the only process and does not provide any logging facilities other than the messages mysqld writes to STDOUT and STDERR, accessible via docker logs.
Now the question arises: what is the appropriate way to run a service inside a Docker container?
Is it wrong to run only a single application process inside a docker container and not provide basic Linux system services like syslog?
Does it depend on the type of service running inside the container?
Check this discussion for a good read on this issue. Basically, the official party line from Solomon Hykes and Docker is that containers should be as close to single-process micro-servers as possible. There may be many such servers on a single 'real' server. If a process fails, you should just launch a new container rather than try to set up initialization, etc., inside the container. So if you are looking for the canonical best practice, the answer is yes: no basic Linux services. It also makes sense when you think in terms of many containers running on a single node: do you really want them all to run their own copies of these services?
That being said, the state of logging in the Docker service is famously broken; even Solomon Hykes, the creator of Docker, admits it's a work in progress. In addition, you normally need a little more flexibility for a real-world deployment. I normally mount my logs onto the host system using volumes and run a logrotate daemon, etc., on the host VM. Similarly, I either install sshd or leave an interactive shell open in the container so I can issue minor commands without relaunching, at least until I'm really sure my containers are air-tight and no more debugging will be needed.
Edit:
With Docker 1.3 and the exec command, it's no longer necessary to "leave an interactive shell open."
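For instance (container and image names, and the log path, are placeholders):

# get a shell in an already-running container, no sshd required
docker exec -it my-container /bin/sh

# bind-mount a host directory for the container's logs so logrotate on the host can manage them
docker run -d -v /var/log/myapp:/var/log/myapp my-image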
It depends on the type of service you are running.
Docker allows you to "build, ship, and run any app, anywhere" (from the website). That tells me that if an "app" consists of, or requires, multiple services/processes, then those should be run in a single Docker container. It would be a pain for a user to have to download and then run multiple Docker images just to run one application.
As a side note, breaking up your application into multiple images is subject to configuration drift.
I can see why you would want to limit a Docker container to one process. One reason is startup time: when creating a Docker provisioning system, it's essential to keep a container's startup time to a minimum so that scaling out is fast. This means that if I can get away with running a single process per Docker container, I should go for it. But that's not always possible.
To answer your question directly: no, it's not wrong to run a single process in Docker.
HTH