Docker has its own internal DNS system. However, in a web application I'm building that frequently makes rDNS (reverse DNS) calls, I've noticed that it does not cache results. This makes it very easy to DoS the network and adds unwanted overhead.
Does Docker provide a built-in way to cache rDNS results or is this something I must build myself?
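For context, this is roughly the kind of lookup the app performs on each request (a minimal Node.js sketch; the function name is made up):

```js
// Minimal sketch of the lookup the app performs. Each call goes through
// Docker's embedded DNS resolver and, as far as I can tell, is never cached.
const { reverse } = require('dns').promises;

async function clientName(ip) {
  const names = await reverse(ip); // PTR lookup, e.g. '8.8.8.8' -> ['dns.google']
  return names[0];
}
```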
I need to set up an Nginx reverse proxy in front of a Node.js app that needs to be deployed on Google Cloud Run.
Use Cases
- Serve assets gzipped via Nginx (I don't want to burden Node with gzip compression)
- Block small DDoS attacks
I didn't find any tutorial on setting up Nginx and Node in Cloud Run.
I also need to install PM2 for Node.
How do I do this setup in Docker, and how can I configure Nginx before deploying?
Thanks in advance
I need to set up an Nginx reverse proxy in front of a Node.js app that needs to be deployed on Google Cloud Run.
Cloud Run already provides a reverse proxy - Cloud Run Proxy. This is the service that load balances, provides custom domains, handles authentication, etc. for Cloud Run. However, there is nothing in the design of Cloud Run to prevent you from using Nginx as a reverse proxy inside your container. There is also nothing to prevent you from using Nginx as a separate container front end to another Cloud Run service. Note that in the last case you will be paying twice as much, as you will need two Cloud Run services: one for the Nginx service URL and another for the node application.
Use Cases - Serve assets gzipped via Nginx (I don't want to burden Node with gzip compression) - Block small DDoS attacks
You can either perform compression in your node app or in Nginx. The result is the same. The performance impact is the same. Nginx does not provide any overhead savings. Nginx may be more convenient in some cases.
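For illustration, a minimal sketch of doing the compression in the Node app itself; it assumes Express and the compression middleware, which your question does not specify:

```js
// Hypothetical Express setup that gzips responses in the Node process,
// removing the need for an Nginx layer for this purpose.
const express = require('express');
const compression = require('compression');

const app = express();
app.use(compression());               // negotiates gzip per request
app.use(express.static('public'));    // serves the static assets
app.listen(process.env.PORT || 8080);
```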
Regarding your comment about blocking small DDoS attacks: Cloud Run autoscales, which means each Cloud Run instance has only limited exposure to a DoS. As the DDoS traffic increases, Cloud Run will launch more instances of your container. Without a prior request from you, Cloud Run stops scaling at 1,000 instances. Nginx will not provide any benefit that I can think of to mitigate a DDoS attack.
I didn't find any tutorial on setting up Nginx and Node in Cloud Run.
I am not aware of a specific document covering Nginx and Cloud Run. However, you do not need one. Any document covering Nginx and Docker will be fine. If you want to run Nginx in the same container as your node application you will need to write a custom script to launch both Nginx and Node.
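A minimal sketch of such a launch script, assuming Nginx is configured to listen on $PORT and the Node app on an internal port (8081 and the file paths are placeholders):

```sh
#!/bin/sh
# start.sh -- hypothetical entrypoint that runs both processes in one container.
set -e

# Start the Node app on an internal port that only Nginx talks to.
PORT=8081 node /app/server.js &

# Run Nginx in the foreground on Cloud Run's $PORT (8080) so the
# container stays alive as long as Nginx does.
exec nginx -g 'daemon off;'
```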
I also need to install PM2 for Node.
Not possible. PM2 has a user interface and GUI. Cloud Run only exposes $PORT over HTTP from a Cloud Run instance.
How do I do this setup in Docker, and how can I configure Nginx before deploying?
There are numerous tutorials on the Internet for setting up Nginx with Docker; two examples are below.
How to run NGINX as a Docker container
Deploying NGINX and NGINX Plus on Docker
I have answered each of your questions. Now some advice:
Using Nginx with Cloud Run does not make any sense with a Node.js application. Just run your node application and let Cloud Run Proxy do its job.
Compression is CPU intensive. Cloud Run is designed for HTTP style microservices that are small, fast, and compact. You will pay for increased CPU time. If you have content that needs to be compressed, compress it first and serve the content compressed. There are cases where compression in Cloud Run is useful and/or correct, but look at your design and optimize where possible. Static content should be served by Cloud Storage, for example.
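If you do want compressed static assets without paying for compression at request time, one option (a sketch; the path and file types are assumptions) is to pre-compress them during the image build and serve the .gz files directly:

```dockerfile
# Hypothetical Dockerfile step: gzip static assets once at build time,
# then serve the .gz files as-is (e.g. with Nginx's gzip_static or
# equivalent logic in the app).
RUN find /app/public -type f \
      \( -name '*.js' -o -name '*.css' -o -name '*.html' \) \
      -exec gzip -k -9 {} \;
```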
Cloud Run can handle a Node.js application easily with excellent performance and scalability provided that you follow its design criteria and purpose.
Key factors to keep in mind:
Low cost, you only pay for requests. Overlapping requests have the same cost as one request.
Stateless. Containers are shut down when not needed which means you must design for restarts. Store state elsewhere such as a database.
Only serves traffic on port $PORT, which today is 8080 (see the sketch after this list).
Public traffic can be either HTTP or HTTPS. Traffic from the Cloud Run Proxy to the container is HTTP.
Custom domain names. Cloud Run makes HTTPS for URLs very easy.
UPDATE: Only HTTPS is now supported for the public endpoint (Public Traffic).
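To make the $PORT point concrete, here is a minimal sketch of a Cloud Run-ready Node server (plain http module, no framework assumed):

```js
// Listen on the port Cloud Run injects via $PORT (8080 today),
// falling back to 8080 for local runs; keep no local state.
const http = require('http');
const port = process.env.PORT || 8080;

http.createServer((req, res) => {
  res.end('ok\n');
}).listen(port, () => console.log(`listening on ${port}`));
```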
I think you should consider using a different approach.
Running multiple processes in a single container is not a best practice. The more common implementation of a proxy as you describe is to use 2 containers (the proxy is often called the sidecar) but this is not possible with Cloud Run.
Google App Engine may be more suitable.
App Engine Flexible permits deployments of containers that are proxied (behind the scenes) by Nginx. You may use static content with Flexible and can incorporate a CDN. App Engine Standard addresses your needs too.
https://cloud.google.com/appengine/docs/flexible/nodejs/serving-static-files
https://cloud.google.com/appengine/docs/standard/nodejs/runtime
Like Cloud Run, App Engine is serverless but provides more flexibility and is a more established service. App Engine integrates with more (all?) GCP services too whereas Cloud Run is limited to a subset.
Alternatively, you may consider Kubernetes (Engine). This provides almost limitless flexibility but requires more ops. As you're likely aware, there's a Cloud Run implementation that runs atop Kubernetes, Istio and Knative.
Cloud Run is a compelling service, but it is only appropriate if you can meet its (currently) constrained requirements.
I have good news for you. I have written a blog post about exactly what you needed with sample code.
This example puts NGINX in the front (port 8080 on Cloud Run) while proxying the traffic selectively to another service running in the same container (on port 8081).
Read the blog post: https://ahmet.im/blog/cloud-run-multiple-processes-easy-way/
Source code: https://github.com/ahmetb/multi-process-container-lazy-solution
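As a minimal illustration of the idea (the post and repository above contain the complete, working version):

```nginx
# Hypothetical nginx server block: accept traffic on Cloud Run's port
# and proxy it to the second process on 8081 inside the same container.
server {
    listen 8080;
    location / {
        proxy_pass http://127.0.0.1:8081;
        proxy_set_header Host $host;
    }
}
```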
Google Cloud Compute Systems
To understand GCP computing, it helps to look at an overview of the GCP compute options first.
For your case, I recommend using App Engine Flex to deploy your application. It supports Docker containers, Node.js, and more. To learn how to deploy Node.js to GAE Flex, see https://cloud.google.com/appengine/docs/flexible/nodejs/quickstart
You can install third-party libraries if you want. Moreover, GCP supports global and internal load balancers, which you can apply to your GAE services.
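For reference, the minimal app.yaml for the Node.js Flexible runtime is roughly this (the quickstart above has the full walkthrough):

```yaml
# Minimal App Engine Flexible configuration for a Node.js app.
runtime: nodejs
env: flex
```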
I understand that if I use the host network driver for a container, that container’s network stack is not isolated from the Docker host.
I also understand conceptually that a good reason to still do it might be when security is not a concern and network throughput performance is important, but I am struggling to think of a real-world example of when I can or should do this. A naive example I can think of is a public-facing load balancer or a static-file web server.
I realize it may be possible to mitigate the security concerns outside of Docker by using hosting services like AWS or Google Cloud, but what if that weren't an option?
When would or should you use it in a production environment?
How can you mitigate the security concerns regardless of hosting environment?
How should you interact with other services in other docker networks?
I am struggling to think of a real world example of when I can or should do this. ... When would or should you use it in a production environment?
Your application does not run on TCP or UDP, but another protocol
Your application requires a large range of incoming ports to be published (by default a docker-proxy process is spawned per published port, which can be excessive for a large range)
Your application works with multi-cast or broadcast network traffic
Your application needs to modify the networking layer of the host itself, e.g. a VPN
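A couple of hedged command-line illustrations of those cases (the image names are placeholders):

```sh
# A service that needs a very large port range (avoids spawning one
# docker-proxy process per published port).
docker run -d --network host my-sip-media-server

# A VPN-style container that must modify the host's own network stack
# (host networking alone isn't enough; it also needs NET_ADMIN).
docker run -d --network host --cap-add NET_ADMIN my-vpn
```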
How can you mitigate the security concerns regardless of hosting environment?
You need to trust this application. You've removed a layer of docker namespacing and at that point, the container is a packaging format and likely fits in with the rest of your tooling, but doesn't require the same security approach you may have for other containers.
How should you interact with other services in other docker networks?
You would interact via published ports of the other containers, same as you would an application running outside of a container that needs to connect to an application inside of a container.
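In other words, something like this (image name and port are just examples):

```sh
# The database runs on an ordinary bridge network with a published port...
docker run -d --name db -p 5432:5432 postgres:16

# ...and the host-networked container simply connects to 127.0.0.1:5432,
# exactly like any other process on the host would.
docker run -d --network host my-app
```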
but I am struggling to think of a real world example of when I can or should do this.
Here is a real-world example: we use the host network to speed up the build stage of our GitLab CI/CD pipeline.
The container in question is up and running only during the build phase. It doesn't have any port exposed, it needs a fast network to download all the necessary pieces to build and push the Docker image, and we experienced (on some intermittent occasions) throughput issues and inconsistent behavior during the build stage that we resolved with the host network. Although with the host network we "expose" the IP of such a container, we still don't expose any ports, and after the build phase is finished the container is discarded.
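Concretely, the build-time equivalent looks like this (the registry name is a placeholder):

```sh
# Use the host's network stack for the build's RUN steps; nothing is
# published, and the resulting image is pushed as usual.
docker build --network host -t registry.example.com/app:latest .
docker push registry.example.com/app:latest
```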
I know this doesn't answer all of your questions, but it is the requested real-world example.
I'm not really good at administrative tasks. I need a couple of Tomcat, LAMP, and Node.js servers behind Nginx. To me it seems really complicated to set everything up on the system directly. I'm thinking about containerizing the servers: install Docker and create an Nginx container, a Node.js container, etc.
I expect it to be easier to manage; only the routing to the front Nginx may be a bit of a hassle. It will also give me the ability to back up and add servers easily, not to forget remote deployment and management, and repeatability of the server setup. Separation will probably also shield me from the recurring problem of completely breaking the server by changing some init script, messing up some app server setup, etc.
Is my expectation correct that Docker will abstract me a little more from the "raw" system administration?
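Roughly the stack I have in mind looks like this (just a sketch; image tags, names, and paths are placeholders):

```yaml
# docker-compose.yml -- Nginx in front, app servers behind it.
services:
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
  node-app:
    image: node:20
    working_dir: /app
    command: ["node", "server.js"]
    volumes:
      - ./node-app:/app
  tomcat:
    image: tomcat:10
```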
Side question: is there an administrative GUI I can run to easily deploy, start/stop, and interconnect the containers?
UPDATE
I found a nice note here:
By containerizing Nginx, we cut down on our sysadmin overhead. We will no longer need to manage Nginx through a package manager or build it from source. The Docker container allows us to simply replace the whole container when a new version of Nginx is released. We only need to maintain the Nginx configuration file and our content.
Yes, Docker will do this for you, but that does not mean you will no longer administer the OS for the services you run.
It's more that Docker simplifies that management because you:
do not need to pick one specific OS for all of your services, which would otherwise force you to install a service out-of-band because it has not been released for the OS of your choice, leaving you with the wrong version and so on. Instead, Docker lets you pick the right OS or OS version (Debian wheezy or jessie, Ubuntu 12.x, 14.x, 16.x, or even Alpine) for each service in question.
Also, Docker offers pre-made images so you do not need to build the images for Nginx, MySQL, Node.js and so on yourself. You can find them on https://hub.docker.com (see the commands after this list).
Docker makes it very easy and convenient to remove a service again, without littering your system over time.
Docker offers you better "mobility": you can easily move the stack or replicate it on a different host - you do not need to reconfigure the host and hope it will "be the same".
With Docker you do not need to think about the convergence of containers during their lifetime or across stack improvements, since they are recreated from the image again and again - from scratch, so there is no convergence.
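For example, what picking a suitable pre-made image per service looks like in practice (the tags are only examples, not recommendations):

```sh
docker run -d --name web nginx:stable-alpine     # Alpine-based Nginx
docker run -d --name db  mysql:8.4               # official MySQL image
docker run -d --name app node:20-bookworm-slim   # slim Debian base for Node.js
```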
But Docker also has cons:
It adds more complexity, since you might run "more microservices". You might need service discovery and a live-configuration system, and you need to understand the storage system (volumes) quite a bit.
Docker does not "remove" the OS layer, it just makes it simpler. You still need to maintain it.
Volumes in general might not feel as simple as local file storage (it depends on what you choose).
GUI
I think the most compelling thing that matches what you would define as a "GUI" is Rancher (http://rancher.com/) - it's more than a GUI, it's a complete Docker server management stack. Steep learning curve at first, a lot of gain afterwards.
You will still need to manage the docker host OS. Operations like:
Adding Disks from time to time.
Security Updates
Rotating Logs
Managing Firewall
Monitoring via SNMP/etc
NTP
Backups
...
Docker Advantages:
Rapid application deployment
Portability across machines
Version control and component reuse
Lightweight footprint and minimal overhead
Simplified maintenance
...
Docker Disadvantages:
Adds Complexity (Design, Implementation, Administration)
GUI tools are available; some of them are:
Kitematic -> windows/mac
Panamax
Lorry.io
docker ui
...
Recommendation: start by learning the Docker CLI, as the GUI tools don't have all of its nifty features.
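For example, a few everyday commands worth learning first ("web" is just an example container name):

```sh
docker ps -a              # list containers, including stopped ones
docker logs -f web        # follow a container's log output
docker exec -it web sh    # open a shell inside a running container
docker stats              # live CPU/memory usage per container
docker system df          # disk usage of images, containers, volumes
```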
I have been having this issue for a while and I am not sure how to fix it. I have a Docker container running PHP + Apache with an application. The MySQL and MongoDB servers are on the same network as the host. So:
MySQL DB Server IP: 192.168.1.98
Mongo DB Server IP: 192.168.1.98
Host: 192.168.1.90
For some reason the connectivity between the application running in the container and the DB server is pretty slow, and sometimes long queries take more than a minute to run.
I can say the problem is not the DB server, because running the same application directly on that server is fast, so I think it is something network related, but I am not sure what or why.
Can anyone give me some advice on this?
You have not given much information, but based on what you have described:
The simplest reason could be that the amount of data being transferred across the network is high. Even though the hosts are on the same network, transferring a large amount of data between two machines is considerably slower than reading it on the same host.
Since it seems like you are running both MongoDB and MySQL on the same host, they could easily be interfering with each other's execution. While containers provide isolation between them at the operating-system level, the hardware is not aware of containers; when both try to use the disk at the same time, performance can degrade.
I have personally run into both these issues at different times and while they seem simple they can have significant impact on the performance. It would be nice if you could provide some additional information to help better understand your problem.
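As a starting point, something like the following can help separate the network path from the database itself (the container name and credentials are placeholders, and the client tools may need to be installed in the image first):

```sh
# Raw network latency, host vs. container:
ping -c 5 192.168.1.98
docker exec -it myapp ping -c 5 192.168.1.98

# A trivial query from inside the container; if this returns instantly
# while the application's queries are slow, the network path is probably
# not the bottleneck.
docker exec -it myapp mysql -h 192.168.1.98 -u appuser -p -e 'SELECT 1'
```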
There are many use cases for Docker, and they all have something to do with portability, testing, availability, etc., which are especially useful for large enterprise applications.
Consider a single Linux server on the Internet that acts as mail, web, and application server - mostly for private use. No cluster, no need to migrate services, no similar services that could be created from the same image.
Is it useful to consider wrapping each of the provided services in a Docker container, instead of just running them directly on the server (in a chroot environment) when considering the security of the whole server, or would that be using a sledgehammer to crack a nut?
As far as I understand, security really would be increased, as the services would be truly isolated and even gaining root privileges wouldn't allow escaping the container, but the maintenance requirements would increase, as I would need to maintain several independent operating systems (security updates, log analysis, ...).
What would you propose, and what experiences have you made with Docker in small environments?
From my point of view, security is, or will be, one of the strengths of Linux containers and Docker. But there is a long way to go before the environment inside a container is completely secure and isolated. Docker and some other big collaborators like Red Hat have shown a lot of effort and interest in securing containers, and any publicly flagged security issue (about isolation) in Docker has been fixed. Today Docker is not a replacement for hardware virtualization in terms of isolation, but there are projects working on hypervisors that run containers which will help in this area. This issue matters most to companies offering IaaS or PaaS, where virtualization is used to isolate each client.
In my opinion, for a case like the one you propose, running each service inside a Docker container provides one more layer in your security scheme. If one of the services is compromised, there will be one extra lock before an attacker gains access to the whole server and the rest of the services. Maybe the maintenance of the services increases a little, but if you organize your Dockerfiles to use a common Docker image as a base, and you (or somebody else) update that base image regularly, you don't need to update every Docker container one by one. And if you use a base image that is updated regularly (e.g. Ubuntu, CentOS), the security issues that affect those images will be fixed rapidly, and you would only have to rebuild and relaunch your containers to pick up the fixes. Maybe it is extra work, but if security is a priority, Docker may be an added value.
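A sketch of that shared-base pattern (the image names and registry are placeholders, not a recommendation):

```dockerfile
# base/Dockerfile -- rebuilt (and your services relaunched) whenever the
# upstream image publishes security fixes.
FROM debian:bookworm-slim
RUN apt-get update && apt-get -y upgrade && rm -rf /var/lib/apt/lists/*

# Each service's Dockerfile then starts from the shared base, e.g.:
#   FROM registry.example.com/base:latest
#   RUN apt-get update && apt-get install -y postfix ...
```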