Benefits of deploying using docker-compose [closed]

If I am wrong about anything below, please correct me.
Let's say I have a simple frontend application (React, Vue, Angular, whatever) and a backend (Node.js or any REST API provider).
I am able to run both of them separately (not using Docker), or both dockerized using docker-compose.
Approach 1: When not using Docker, I need to deploy my application to two separate servers.
Approach 2: When using docker-compose, I can deploy everything to a single server (like Heroku). The frontend would be on the default HTTP port 80, and the backend would be, for example, on port 81.
I can already see that a huge benefit of approach 2 is that I don't need to pay for two hostings.
My questions are:
How do the two approaches compare in terms of the speed of requests going from the frontend to the backend (I mean this for server-side rendering, e.g. Nuxt.js or Next.js)? Is approach 2 going to be faster because everything is on the same server?
What are the other pros and cons which I am missing?
Thank you

There's no reason why you can't deploy your frontend and backend to the same server whether or not you use Docker and Docker Compose.
Using containers gives you a mechanism to package applications, publish them to registries, and deploy them to other machines with a high degree of confidence that they will run there without change, now and in the future.
Without Docker, you need to provide scripts or some other form of deployment that can install the apps on other machines and, unless these other machines are perfect clones of your development host, you'll likely need to install other OS and software dependencies too.
So, Docker facilitates distribution and packaging of apps and it helps ensure multiple apps run on a single host without (unexpected) interactions between applications.
If you use containers but not Docker Compose, you need to provide scripts (or similar) that describe how your app's components (frontend, backend) interact and combine: which ports they use, their environment variables, etc. Docker Compose facilitates this and provides a convenient mechanism for describing how to combine the pieces into a coherent whole.
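For illustration, a minimal docker-compose.yml along these lines describes such a combination (service names, ports and the API_URL variable are placeholders, not taken from the question):

    # docker-compose.yml - hypothetical frontend + backend on one host
    services:
      frontend:
        build: ./frontend          # built from the frontend's Dockerfile
        ports:
          - "80:80"                # public HTTP port
        environment:
          API_URL: "http://backend:3000"   # reach the backend over the Compose network
      backend:
        build: ./backend
        ports:
          - "81:3000"              # optionally also expose the API on host port 81

Inside the Compose network the frontend (for example an SSR server) can reach the backend by its service name, so those calls never need to leave the host.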
There are fully functional alternatives to Docker and Docker Compose, but both are so widely used that you can assume platform providers and prospective users of your app are willing and able to use them.
Neither Docker nor Docker Compose adds a performance overhead to your app. Both operate primarily on the control plane (managing containers) rather than in your app's data plane (the requests it actually serves).
Running your app on a single server will almost always be more performant than running it on multiple servers, regardless of whether or not you use containers, because you avoid network latency between the servers, which is a significant factor in performance.

Pull image vs. Git pull [closed]

I am currently working on a production release pipeline and am asking myself why people even use container registries to build and push their images when they could just pull the whole repository and run compose up.
Don't get me wrong, I know that Docker is perfect for setting up identical environments, and I use it too in production and development. I would just like to know whether there are any benefits to pulling released images instead.
From my perspective, I set up every dependent service for my app within docker-compose, which I would no longer have access to if my release pipeline pulled the production image instead. On the other hand, when I choose to pull the repo, I just run docker-compose up from my production folder and all dependencies are installed, including the dockerized application built via its Dockerfile.
There are many reasons; let's pick out some of them:
Docker images are not just code
A Docker image contains everything that is necessary for an application. That can be a specific version of Java, PHP, or other dependencies and binaries (ping is a good example).
Docker images are prebuilt
A Git repository contains only code. That means there are no dependencies in there. If you want to run that code in production, the production server must download all dependencies, which can be a lot, especially with npm, and then it has to build everything. The build process can take a long time and needs a lot of resources (CPU time, memory, I/O, ...), resources that are not available to your users while the server is building.
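To make the contrast concrete, here is a hedged sketch (registry name and tag are made up): with prebuilt images the production host only pulls, while building from the repo runs the whole dependency install and build on that host.

    # Option A: docker-compose.yml in production pulls a prebuilt, versioned image
    services:
      app:
        image: registry.example.com/myapp:1.4.2   # already built and tested earlier in CI

    # Option B: docker-compose.yml that builds from the cloned repository
    services:
      app:
        build: .   # runs the full Dockerfile (npm install, compile, ...) on the production server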
Docker containers are isolated
What happens if you want to run different applications on the same server? Spring Boot applications, for example, listen on port 8080 by default, and a host port is an exclusive resource that can only be used by one process. With containers, each application keeps its internal port and is mapped to a different host port.
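As a sketch (host ports and image names are arbitrary examples), two Spring Boot apps can both listen on 8080 inside their containers and still coexist on one host:

    services:
      app-one:
        image: registry.example.com/app-one:latest   # hypothetical image
        ports:
          - "8081:8080"    # host port 8081 -> container port 8080
      app-two:
        image: registry.example.com/app-two:latest
        ports:
          - "8082:8080"    # host port 8082 -> the same container port 8080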
Docker images are versioned
You can pin versions for images, like node:16. Yes, you can get something similar in Git with tags, but image versions are a lot easier to work with.
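A typical tagging flow might look like this (registry name and version are made up):

    docker build -t registry.example.com/myapp:1.4.2 .
    docker push registry.example.com/myapp:1.4.2
    # production then pins exactly that version
    docker pull registry.example.com/myapp:1.4.2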
It is not only docker
The server landscape is changing. Single bare-metal servers are giving way to clusters that can be autoscaled out of the box, which requires very short application startup times, and that is not possible if every node first has to clone and build from Git.
...and many more.

Docker based Web Hosting

I am posting this question due to a lack of experience, and I need professional suggestions. The questions on SO are mainly about how to deploy or host multiple websites using Docker on a single web host. This can be done, but is it ideal for moderate-traffic websites?
I deploy Docker-based containers on my local machine for development. A software container has a copy of the primary application, as well as all of its dependencies: libraries, languages, frameworks, and everything else.
It becomes easy for me to simply migrate the docker-compose.yml or Dockerfile to any remote web server. All the software and dependencies get installed and run just like on my local machine.
Say I have a VPS and I want to host multiple websites using Docker. The only thing that I need to configure is the ports, so that the domains can be mapped to port 80. For this I have to use an extra NGINX instance for routing.
But a VPS can be used to host multiple websites without containerisation. So, is there any special benefit to running Docker on web servers like AWS, Google, HostGator, etc., or is Docker only ideal for development on a local machine and not meant to be deployed on web servers for hosting?
The main benefits of Docker for simple web hosting are, in my opinion, the following:
isolation: each website/service might have different dependency requirements (one might require PHP 5, another PHP 7, and another Node.js).
separation of concerns: if you split your setup into multiple containers, you can easily upgrade or replace one part of it (just consider a setup with two websites that each need a Postgres database; if each website has its own db container, you won't have any issue bumping the Postgres version of one of the websites without affecting the other).
reproducibility: you can build the Docker image once, test it on acceptance, promote the exact same image to staging and later to production. You'll also be able to have the same environment locally as on your server.
environment and settings: each of your services might depend on a different environment (for example SMTP settings or a database connection). With containers you can easily supply each container its specific environment variables.
security: one can argue about this one, as containers by themselves won't do much for you in terms of security. However, due to easier dependency upgrades, separated networking, etc., most people will end up with a setup that is more secure (just think about the db containers again: these can share a network with your app/website container, and there is no need to expose their ports on the host). See the sketch after this list.
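A rough sketch of the last few points, with made-up image names, versions and credentials:

    services:
      site-a:
        image: registry.example.com/site-a:2.1.0
        environment:
          DATABASE_URL: "postgres://site_a:secret@db-a:5432/site_a"
        networks: [net-a]
        ports:
          - "8080:80"
      db-a:
        image: postgres:13             # can be upgraded independently of site-b's database
        environment:
          POSTGRES_PASSWORD: secret
        networks: [net-a]              # reachable only by site-a; no port published on the host
      site-b:
        image: registry.example.com/site-b:1.0.3
        environment:
          DATABASE_URL: "postgres://site_b:secret@db-b:5432/site_b"
        networks: [net-b]
        ports:
          - "8081:80"
      db-b:
        image: postgres:15
        environment:
          POSTGRES_PASSWORD: secret
        networks: [net-b]
    networks:
      net-a: {}
      net-b: {}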
Note that you should be careful with Docker's port mapping. It manipulates iptables directly and will bypass the rules of most host firewalls (like ufw) by default. There is a repo with information on how to avoid this here: https://github.com/chaifeng/ufw-docker
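Apart from the ufw-docker approach linked above, one simple mitigation is to publish ports on the loopback interface only, so that only a reverse proxy on the host is publicly reachable (address, port and image are examples):

    services:
      backend:
        image: registry.example.com/api:latest
        ports:
          - "127.0.0.1:8081:80"   # reachable from the host itself, not from the internet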
Also, there are quite a few projects that make routing requests to the applications (in this case containers) very enjoyable and easy. They usually integrate a proper way to do SSL termination as well. I would strongly recommend looking into Traefik if you set up a web server with multiple containers that should all be accessible on ports 80 and 443.
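A rough sketch of what a Traefik (v2) setup can look like; the domains and images are placeholders and the exact options should be checked against the Traefik documentation:

    services:
      traefik:
        image: traefik:v2.10
        command:
          - "--providers.docker=true"
          - "--entrypoints.web.address=:80"
        ports:
          - "80:80"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro   # lets Traefik discover containers
      site-a:
        image: registry.example.com/site-a:latest
        labels:
          - "traefik.http.routers.site-a.rule=Host(`a.example.com`)"
      site-b:
        image: registry.example.com/site-b:latest
        labels:
          - "traefik.http.routers.site-b.rule=Host(`b.example.com`)"

Traefik watches the Docker socket and picks up routing rules from container labels, so adding another website is just another service with a Host rule.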

Which platform is mostly used in production servers to develop and deploy "application services" [closed]

I need to decouple my single monolithic application into "microservices", where each module is a combination of an application server and a database.
Wondering, out of these:
Vagrant
OpenVZ
Docker (not my preferred choice, as it does not support data persistence)
which one is used on production servers?
TL;DR: Docker and rkt are the enterprise choices; Docker has a much wider community, more attention, and more implementations.
First of all, Docker supports data persistence. You can easily do this via volumes, and lots of drivers are available for different storage backends.
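For example, a named volume keeps PostgreSQL data across container restarts and image upgrades (names and password are placeholders):

    services:
      db:
        image: postgres:15
        environment:
          POSTGRES_PASSWORD: example
        volumes:
          - pgdata:/var/lib/postgresql/data   # data survives container removal and image upgrades
    volumes:
      pgdata: {}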
Docker Philosophy: Microservices.
Google started using containers in the 2000s, and lots of enterprises run containers under heavy load today. Docker is one of the best implementations there is. So, definitely Docker, depending on your needs and environment.
Vagrant is for development environments. You can even use Docker inside Vagrant, or skip Vagrant and run Docker locally.
OpenVZ is focused on providing VPS containers that you configure yourself manually. It provides templates for bare Linux machines that you can start up and shut down and then SSH into, to set them up with whatever you need, like a LAMP stack.
OpenVZ vs Docker: OpenVZ sees a container as a VPS, while Docker sees a container as an application/service. So, definitely Docker for microservices.
rkt: functionally, Docker and rkt are similar; however, along with Docker images, rkt can also download and run "App Container Images" (ACIs). Besides supporting ACIs, rkt has a substantially different architecture that is designed with composability and security in mind.
rkt has no centralized "init" daemon, instead launching containers directly from client commands, making it compatible with init systems such as systemd, upstart, and others.
rkt uses standard Unix group permissions to allow privilege separation between different operations. Once the rkt data directory is correctly set up, container image downloads and signature verification can run as a non-privileged user.

What components should be "containerized" - Docker

I am exploring the use of containers in a new application, have looked at a fair amount of content, and have created a sandbox environment to explore Docker and containers. My struggle is more with understanding which components need to be containerized individually vs. bundling multiple components into my own container, and what points to consider when architecting this.
Example:
I am building a python back end service to be executed via webservice call.
The service would interact with both Mongo DB, and RabbitMQ.
My questions are:
Should I run an individual OS container (e.g. Ubuntu), a Python container, a MongoDB container, a RabbitMQ container, etc.? Combined they all form part of my application, and by decoupling everything I get the ability to scale each part individually?
How would I be able to bundle/link these for deployment without losing the benefits of decoupling/decomposing into individual containers?
Is an OS and Python container actually required, as this will all be running on an OS with Python anyway?
Would love to see how people have approached this problem.
Docker's philosophy: using microservices in containers. The term "Microservice Architecture" has sprung up over the last few years to describe a particular way of designing software applications as suites of independently deployable services.
Some advantages of microservices architecture are:
Easier upgrade management
Eliminates long-term commitment to a single technology stack
Improved fault isolation
Makes it easier for a new developer to understand the functionality of a service
Improved Security
...
Should I run an individual OS container (e.g. Ubuntu), a Python container, a MongoDB container, a RabbitMQ container, etc.? Combined they all form part of my application, and by decoupling everything I get the ability to scale each part individually?
You don't need an individual OS container. Each container uses the Docker host's kernel and contains only the required binaries (Python binaries, for example).
So you will have a Python container for your Python service, a MongoDB container, and a RabbitMQ container.
How would I be able to bundle/link these for deployment without losing the benefits of decoupling/decomposing into individual containers?
For deployments, you will use Dockerfiles plus a docker-compose file. A Dockerfile contains the instructions to create a Docker image. If you are just using official library images, you don't need your own Dockerfiles.
docker-compose will help you orchestrate the container builds (from Dockerfiles), start-ups, creation of the required networks, mounting of the required volumes, and so on.
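A minimal sketch of the setup described in the question, assuming the Python service ships its own Dockerfile (ports, URLs and names are placeholders):

    services:
      api:
        build: .                      # your Python service, built from its Dockerfile
        ports:
          - "8000:8000"
        environment:
          MONGO_URL: "mongodb://mongo:27017/app"
          AMQP_URL: "amqp://rabbitmq:5672"
        depends_on:
          - mongo
          - rabbitmq
      mongo:
        image: mongo:6                # official library image, no Dockerfile needed
        volumes:
          - mongodata:/data/db
      rabbitmq:
        image: rabbitmq:3-management  # includes the management UI
    volumes:
      mongodata: {}

Each service can then be scaled or replaced independently without touching the others.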

Using docker, puppet and jenkins for continuous delivery and PROD deployment [closed]

Need to set up infrastructure for a new project. Previously I have used standalone Puppet with Jenkins, but now I'm thinking about incorporating Docker builds, so that I could push from dev to staging to production without triggering a build, simply by fetching existing Docker images that have already been built.
The app:
Java web app with a REST API, backed by PostgreSQL, Neo4j and Elasticsearch
Client-side app written in Angular that talks to the Java app through the REST API
Code stored in git repositories
Envs:
Dev server (building, dev + test environments) - 32GB linux machine
Test server (AWS)
Production (AWS)
Setup:
So basically I was thinking something like this:
Separate Docker images for the Java + client-side app, PostgreSQL, Elasticsearch and Neo4j that talk to each other and have their data stored on the hosts through Docker volumes, or by using Docker data containers (have not decided on the approach yet)
Jenkins building all the code and creating Docker images that would be pushed to a private internal repository
Integration tests run with the Puppet docker module on the DEV server
Push to production with Jenkins via Puppet, using Docker
Why should i use docker?
Big dev machine - could easily run multiple instances of my app without the need for virtualization (could have an unstable dev, stable dev, SIT, etc.)
Ease of deployment (use Docker and the Puppet docker module) and rollback (simply retrieve the previous version from the Docker repository)
Quick migration and ability to spawn new instances
Preparation for easy scaling of different parts of the system (eg. clustering elasticsearch)
Questions
Does this look reasonable?
I'm thinking about using this Puppet module: https://github.com/garethr/garethr-docker. How would I update my environments via it? Must I somehow stop the Docker container, do a docker rm, and then docker run?
We're using Liquibase for database update management. I guess this should be handled separately from Docker for updates/rollbacks?
Any suggestions welcome, thank you.
You're building a container-orchestrated PaaS. My advice is to look at similar systems for best practices that might be worth emulating.
The first place to start is the 12-factor app site, written by one of the co-founders of Heroku. The site is incredibly useful, describing some of the desirable operational features of a modern cloud-scale application. The next stop would be Heroku itself, to get an idea of what a "modern" development and deployment environment can look like.
I'd also recommend looking at some of the emerging open-source PaaS platforms. Large vendor-supported systems like Cloud Foundry and OpenShift are all the rage at the moment, but simpler solutions (built on Docker) are also emerging. One of these, Deis, uses a related technology, Chef, so it might give some insight into how Puppet could be used to manage your runtime Docker containers. (Modern Deis no longer uses Chef.)
Answers:
Yes this is quite reasonable.
Instead of managing "environments", do what Heroku does and just create a new application for each version of your application. This is the "Build, Release, Run" pattern. In your case, Jenkins is triggered by new code and creates the Docker images, which can be saved into a repository and used to deploy instances of your application release.
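A hedged sketch of that flow; the registry, names and flags are made up, and the run step would in practice be driven by your Puppet module:

    # Build & release (Jenkins)
    docker build -t registry.internal/myapp:42 .
    docker push registry.internal/myapp:42

    # Run (on the target host)
    docker pull registry.internal/myapp:42
    docker stop myapp && docker rm myapp      # retire the previous release
    docker run -d --name myapp -p 8080:8080 \
      -e DATABASE_URL=postgres://db.internal/myapp \
      registry.internal/myapp:42

A rollback is the same run step with the previous tag.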
The database would be an example of a "backing service", which you connect to your application at application creation time. An upgrade would amount to stopping one application version and starting another, both connected to the same database.
