Running Vulnerable Web Apps in Docker

I would like to assess multiple security testing tools (like OWASP ZAP) by running them against multiple vulnerable web applications (like Damn Vulnerable Web Application - DVWA).
I know that running vulnerable web applications while connected to the internet can be dangerous and is not the best idea. But is it also unsafe to run these apps inside Docker containers?
Would that be a good option without having to worry about getting hacked, or should I disconnect from the internet while running these apps inside Docker?
Unfortunately, I don't know much about how well a Docker container is isolated from the rest of my PC, or what an attacker who compromised a server inside a Docker container would be able to do. Thank you
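For reference, a common way to keep such a lab away from the internet is to put the vulnerable app and the scanner on an internal Docker network with no published ports at all, or to bind any published port to 127.0.0.1 only. A minimal docker-compose sketch; the image names vulnerables/web-dvwa and zaproxy/zap-stable are assumptions, substitute whatever images you actually use:

    # docker-compose.yml - isolated scanning lab (sketch only)
    services:
      dvwa:
        image: vulnerables/web-dvwa      # assumed community DVWA image
        networks: [lab]
        # no "ports:" section, so nothing is reachable from outside the host

      zap:
        image: zaproxy/zap-stable        # assumed ZAP image name
        command: zap-baseline.py -t http://dvwa:80
        networks: [lab]

    networks:
      lab:
        internal: true                   # containers reach each other, but have no route to the internet

With an internal network the containers resolve each other by service name but have no outbound connectivity; if you want to browse DVWA from your own browser, publish it as 127.0.0.1:8080:80 so it is only reachable from the host itself.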

Related

Docker based Web Hosting

I am posting this question due to lack of experience, and I need professional suggestions. The questions on SO are mainly about how to deploy or host multiple websites using Docker on a single web host. This can be done, but is it ideal for moderate-traffic websites?
I deploy Docker-based containers on my local machine for development. A software container has a copy of the primary application, as well as all dependencies: libraries, languages, frameworks, and everything else.
It becomes easy for me to simply migrate the “docker-compose.yml” or “Dockerfile” to any remote web server. All the software and dependencies get installed, and everything runs just like on my local machine.
Say I have a VPS and I want to host multiple websites using Docker. The only thing that I need to configure is the ports, so that the domains can be mapped to port 80. For this I have to use an extra NGINX for routing.
But a VPS can be used to host multiple websites without containerisation. So, is there any special benefit to running Docker on web servers like AWS, Google, Hostgator, etc., or is Docker only suited to development on a local machine and not meant to be deployed to web servers for hosting?
The main benefits of Docker for simple web hosting are, in my opinion, the following:
Isolation: each website/service might have different dependency requirements (one might require PHP 5, another PHP 7, and another Node.js).
Separation of concerns: if you split your setup into multiple containers, you can easily upgrade or replace one part of it. (Just consider a setup with two websites which each need a Postgres database. If each website has its own DB container, you won't have any issue bumping the Postgres version of one of the websites without affecting the other.)
Reproducibility: you can build the Docker image once, test it on acceptance, and promote the exact same image to staging and later to production. You'll also be able to have the same environment locally as on your server.
Environment and settings: each of your services might depend on a different environment (for example SMTP settings or a database connection). With containers you can easily supply each container its specific environment variables.
Security: one can argue about this one, as containers by themselves won't do much for you in terms of security. However, due to easier dependency upgrades, separated networking, etc., most people will end up with a setup which is more secure. (Just think about the DB containers again: they can share a network with your app/website container, and there is no need to expose the database port on the host.)
Note that you should be careful with Docker's port mapping. It manipulates iptables directly and will, by default, override the settings of most host firewalls (like ufw). There is a repo with information on how to avoid this here: https://github.com/chaifeng/ufw-docker
There are also quite a few projects which make routing requests to the applications (in this case, containers) very enjoyable and easy. They usually integrate a proper way to do SSL termination as well. I would strongly recommend looking into Traefik if you set up a web server with multiple containers which should all be accessible on ports 80 and 443.
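As a rough illustration of such a setup (a sketch only; the hostnames a.example.com / b.example.com and the nginx images are placeholders for your actual sites), Traefik discovers containers through labels and routes by hostname, while only Traefik itself publishes a port on the host:

    # docker-compose.yml - Traefik routing two sites on one host (sketch)
    services:
      traefik:
        image: traefik:v2.11
        command:
          - --providers.docker=true
          - --providers.docker.exposedbydefault=false
          - --entrypoints.web.address=:80
        ports:
          - "80:80"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro

      site-a:
        image: nginx:stable              # placeholder for the first website
        labels:
          - traefik.enable=true
          - traefik.http.routers.site-a.rule=Host(`a.example.com`)
          - traefik.http.routers.site-a.entrypoints=web

      site-b:
        image: nginx:stable              # placeholder for the second website
        labels:
          - traefik.enable=true
          - traefik.http.routers.site-b.rule=Host(`b.example.com`)
          - traefik.http.routers.site-b.entrypoints=web

Only the Traefik container publishes a port here; the site containers stay on the internal Docker network, so the iptables/ufw caveat above only applies to the ports you deliberately publish.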

How should I host a .Net Core Console application that needs to run 24/7?

My application is written in .NET Core as a console app. It consumes a RabbitMQ queue, listens on SignalR sockets, calls third-party APIs, and publishes to RabbitMQ queues. It needs to run 24/7.
This is all working great in my local environment, but now that I am ready to deploy to a web server, I am trying to work out how best to host this application. I am leaning towards deploying into a Docker container, but I am unsure whether this is advisable for a 24/7 application.
Are containers designed for short-lived workers only, and will they be costly to leave running all the time?
Can I put my container on my web server alongside my Web APIs etc., perhaps hosted on the same Windows EC2 box to save hosting costs?
How would others approach the deployment of this .NET Core application onto a web hosting environment?
Does your application maintain any state? You can have a long-lived application, but you'll want to handle that state explicitly if you do. You might be able to use a compose file to handle everything like volumes, networking, and restart policies.
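To make that concrete, here is a rough sketch of such a compose file (the image name, volume paths and environment variable are made up; adjust them to your app):

    # docker-compose.yml - long-running .NET worker plus broker (sketch)
    services:
      worker:
        image: myregistry/my-console-worker:latest   # your published .NET image
        restart: unless-stopped          # bring the worker back after crashes or host reboots
        environment:
          RABBITMQ__HOST: rabbitmq       # hypothetical setting your app reads
        depends_on:
          - rabbitmq
        volumes:
          - worker-state:/app/state      # only needed if the app keeps local state

      rabbitmq:
        image: rabbitmq:3-management
        restart: unless-stopped
        volumes:
          - rabbitmq-data:/var/lib/rabbitmq

    volumes:
      worker-state:
      rabbitmq-data:

Containers are fine for long-running processes; the restart policy (or an orchestrator) is what keeps the worker up around the clock, and the named volumes cover the state concern mentioned above.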

What Rebus transport should I use for running in Docker, but not Azure

I am trying to set up a Docker container for an ASP.NET Core/net47 application that uses Rebus with MSMQ, but it doesn't seem to be possible to use MSMQ with Docker.
The application is not hosted on Azure, and it has to communicate via the bus with external services that do not share a database, so it can't be the SQL Server transport either, I guess.
So I wonder... what are my options with Rebus?
Well.... I guess your best option at the moment is to pull down a Docker image with RabbitMQ in it.
But since I'm not a Docker kind of guy, I cannot give you any further instructions besides that.
Since this question reaches beyond Rebus and far into Docker territory, it would be awesome if someone well-versed in Docker stuff would chime in.
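To add a bit of Docker flavour to that: Rebus has a RabbitMQ transport (the Rebus.RabbitMq package), and running the broker next to the application is typically just one more service in the compose file. A sketch, with a placeholder application image and the default development credentials:

    # docker-compose.yml - app plus RabbitMQ broker for the Rebus transport (sketch)
    services:
      app:
        image: myregistry/my-rebus-app:latest              # placeholder for your application image
        environment:
          RABBITMQ_CONNECTION: amqp://guest:guest@rabbitmq   # dev credentials only
        depends_on:
          - rabbitmq

      rabbitmq:
        image: rabbitmq:3-management
        ports:
          - "127.0.0.1:15672:15672"      # management UI on localhost; AMQP (5672) stays inside the Docker network

The application would then point its Rebus transport at the broker via the service name rabbitmq; nothing RabbitMQ-related needs to be exposed publicly.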

Proprietary Docker Containers

I'm looking for a way to distribute my applications in Docker containers/stacks. These will go to clients who should be able to start and stop the containers; however, I would prefer that they not reverse engineer the content within the containers or run the containers on a host other than the one they were shipped on. What's the most effective method of distributing containers to customers?
So far as I can tell, securing the host and having the application follow traditional licensing methods is about as close as I'm going to get, and Docker may not provide any added benefit here.

Is it useful to run publicly-reachable applications as Docker containers just for the sake of security?

There are many use cases for Docker, and they all have something to do with portability, testing, availability, and so on, which are especially useful for large enterprise applications.
Consider a single Linux server on the internet that acts as mail, web, and application server, mostly for private use. No cluster, no need to migrate services, no similar services that could be created from the same image.
When considering the security of the whole server, is it useful to wrap each of the provided services in a Docker container instead of just running them directly on the server (in a chroot environment), or would that be using a sledgehammer to crack a nut?
As far as I understand, security really would be increased, as the services would be properly isolated, and even gaining root privileges wouldn't allow escaping the chroot; but the maintenance requirements would also increase, as I would need to maintain several independent operating systems (security updates, log analysis, ...).
What would you propose, and what experience have you had with Docker in small environments?
From my point of view, security is, or will be, one of the strengths of Linux containers and Docker. But there is a long way to go before you get a completely secure and isolated environment inside a container. Docker and some other big collaborators like Red Hat have shown a lot of effort and interest in securing containers, and every publicly flagged security issue (about isolation) in Docker has been fixed. Today Docker is not a replacement for hardware virtualization in terms of isolation, but there are projects working on hypervisors running containers that will help in this area. This issue is more relevant to companies offering IaaS or PaaS, where they use virtualization to isolate each client.
In my opinion, for a case like the one you propose, running each service inside a Docker container provides one more layer in your security scheme. If one of the services is compromised, there is one extra lock before an attacker gains access to your whole server and the rest of the services. Maybe the maintenance of the services increases a little, but if you organize your Dockerfiles to use a common Docker image as a base, and you (or somebody else) update that base image regularly, you don't need to update every container one by one. Also, if you use a base image that is updated regularly (e.g. Ubuntu, CentOS), the security issues that affect those images will be fixed rapidly, and you'd only have to rebuild and relaunch your containers to pick up the fixes. Maybe it is extra work, but if security is a priority, Docker may be an added value.
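To illustrate the "common base image, rebuild and relaunch" point with a sketch (the service names and build directories are invented): if every service is built from its own small Dockerfile on top of the same regularly updated base, patching all of them comes down to rebuilding against the latest base and recreating the containers.

    # docker-compose.yml - each service isolated in its own container,
    # all built on the same regularly updated base image (sketch)
    services:
      mail:
        build: ./mail                    # Dockerfile starting with e.g. "FROM debian:stable-slim"
        restart: unless-stopped

      web:
        build: ./web                     # same base image as the other services
        restart: unless-stopped
        ports:
          - "80:80"

      app:
        build: ./app
        restart: unless-stopped

    # Patching then boils down to:
    #   docker compose build --pull      # --pull fetches the latest base image
    #   docker compose up -d             # recreates only the containers whose image changed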
