I am currently working on a production release pipeline and am asking myself: why do people even use container registries to build and push their images when they could just as well pull the whole repository and run docker-compose up?
Don't get me wrong, I know that Docker is great for setting up identical environments, and I use it in both production and development. I would just like to know whether there are any benefits to pulling released images instead.
From my perspective, I set up every dependent service for my app in docker-compose, and I would no longer have access to those definitions if my release pipeline pulled a production image instead. On the other hand, when I pull the repo, I just run docker-compose up from my production folder and all dependencies are brought up, including the dockerized application itself, built from its Dockerfile.
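For concreteness, here is a rough sketch of the two setups I mean (the registry URL, image name, and tag are placeholders):

```yaml
services:
  # Variant A: production pulls a prebuilt, versioned image from a registry
  app-from-registry:
    image: registry.example.com/myapp:1.4.2
    ports:
      - "8080:8080"

  # Variant B: production clones the repo and builds the image on the spot
  app-from-source:
    build:
      context: .             # repository checkout on the server
      dockerfile: Dockerfile
    ports:
      - "8081:8080"
```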
There are many reasons; let's pick out some of them:
Docker images are not just code
A Docker image contains everything that is necessary for an application: a specific version of Java, PHP, or other dependencies, and binaries (ping is a good example).
Docker images are prebuilt
A git repository contains only code; there are no dependencies in it. If you want to run that code in production, the production server must first download all dependencies, which can be a lot, especially with npm, and then it has to build the code. The build process can take a long time and needs a lot of resources (CPU time, memory, I/O, ...), resources that are not available to your users while the server is building.
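As an illustration, a hypothetical multi-stage Dockerfile for a Node.js app bakes the dependency download and the build into the image itself, so the production server only pulls and runs the finished result:

```dockerfile
# Hypothetical Node.js example: dependencies and the build step run once,
# at image build time, not on the production server.
FROM node:16 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci              # heavy dependency download happens here
COPY . .
RUN npm run build       # CPU/memory-intensive build happens here

FROM node:16-slim
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]
```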
Docker containers are isolated
What happens if you want to run different applications on the same server? Spring Boot applications, for example, listen on port 8080 by default, and a port is an exclusive resource that can only be used by one process. Containers give each application its own network namespace, so you simply map each container's port to a different host port.
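A small sketch of that mapping (service names, image names, and host ports are made up): both apps still listen on 8080 inside their own container, and only the published host ports differ.

```yaml
services:
  billing:
    image: registry.example.com/billing:2.0.1
    ports:
      - "8080:8080"   # host 8080 -> container 8080
  reporting:
    image: registry.example.com/reporting:1.3.0
    ports:
      - "8081:8080"   # host 8081 -> the other app's 8080
```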
Docker images are versioned
You can pin versions for images, like node:16. Yes, you can get something similar in git with tags, but image tags are a lot easier to work with.
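For example, a release pipeline might tag and push one image per release, and production pins exactly one of those tags (the registry URL and version numbers are placeholders):

```bash
# Build and publish a versioned image
docker build -t registry.example.com/myapp:1.4.2 .
docker push registry.example.com/myapp:1.4.2

# Production pulls exactly the version it wants; rolling back is just
# pulling and running the previous tag
docker pull registry.example.com/myapp:1.4.1
```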
It is not only Docker
The way servers are run is changing. Bare-metal servers are giving way to clusters that can be autoscaled out of the box. For that, a very short startup time for applications is necessary, and that is not achievable if every node first has to clone and build from git.
...and many more reasons.
If I am wrong about anything below, please correct me.
Let's say I have a simple frontend application (React, Vue, Angular, whatever) and a backend (Node.js or any REST API provider).
I am able to run both of them separately (not using Docker), or both dockerized using docker-compose.
Approach 1: When not using Docker, I need to deploy my application to two separate servers.
Approach 2: Using docker-compose allows me to deploy everything to a single server (like Heroku). The frontend would be on the default HTTP port 80, the backend, for example, on port 81.
I can already see that a huge benefit of approach 2 is that I don't need to pay for two hosts.
My questions are:
How do the two approaches compare in terms of the speed of requests going from frontend to backend (I mean for server-side rendering, like Nuxt.js or Next.js)? Is approach 2 going to be faster because everything is on the same server?
What are the other pros and cons which I am missing?
Thank you
There's no reason why you can't deploy your frontend and backend to the same server whether or not you use Docker and Docker Compose.
Using containers provides a mechanism by which you can package applications and publish these to registries and deploy them to other machines with a high degree of confidence that they will run on the other machines without change, now and in the future.
Without Docker, you need to provide scripts or some other form of deployment that can install the apps on other machines and, unless these other machines are perfect clones of your development host, you'll likely need to install other OS and software dependencies too.
So, Docker facilitates distribution and packaging of apps and it helps ensure multiple apps run on a single host without (unexpected) interactions between applications.
If you use containers but not Docker Compose, you need to provide scripts (or similar) that describe how your app's components (frontend, backend) interact and combine: which ports they use, their environment variables, and so on. Docker Compose facilitates this and provides a convenient mechanism to describe how to combine the pieces into a coherent whole.
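For example, a minimal compose file for the frontend/backend split from the question might look like this (image names, ports, and the environment variable are illustrative):

```yaml
services:
  frontend:
    image: registry.example.com/frontend:1.0.0
    ports:
      - "80:80"                        # public HTTP port
    environment:
      API_URL: "http://backend:3000"   # reaches the backend over the compose network
    depends_on:
      - backend
  backend:
    image: registry.example.com/backend:1.0.0
    ports:
      - "81:3000"                      # published on the host as port 81
```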
There are fully functional alternatives to Docker and Docker Compose, but Docker and Docker Compose are widely used, and you can assume that platform providers and prospective users of your app are willing and able to use both.
Neither Docker nor Docker Compose adds a performance overhead to your app. Both primarily operate on the control plane (so-called east-west traffic) rather than your app's data plane (so-called north-south traffic).
Running your app on a single server will almost always be more performant than running on multiple servers, regardless of whether or not you use containers, because you avoid network latency, which is a significant contributor to overall response time.
I need to set up an environment with Docker containing multiple technologies, such as a database, a test environment, continuous integration, and some other things. I also need it to be available for my teammates to use.
I don't quite understand Docker beyond the high-level concept of it, so I have no idea where to start. Useful answers would range from a step-by-step how-to to simply pointing me towards the right links for my problem. Thank you!
We intend to use either:
PostgreSQL
Node.js
Vue
Jenkins
or:
PostgreSQL
Android Studio
Jenkins
To answer your first question about sharing a dev Docker setup with teammates: you should have two different docker-compose files in your project, for example one for dev and one for prod.
On the other hand, since you are not yet comfortable with Docker, you are better off getting involved with it step by step:
Learn about building a stateless application, because when you work with Docker you will want to scale horizontally later on.
Dockerize your apps (learn how to write a Dockerfile for your Node.js project).
Learn how to write a docker-compose file for a Node.js + Postgres application; test it and make sure the two services are connected and share the Docker network you define in the compose file (see the sketch after this list).
You need a Docker registry, such as Docker Hub or your own installation like Nexus, to push your production-ready images after Jenkins' automated tests pass; those images are what you then deploy.
You can put your frontend and backend in one docker-compose file, but I wouldn't recommend it, because multiple teams would then have to work with a single docker-compose file, which might confuse them at first.
You can ask your DevOps team to install Jenkins and then create your CI YAML files.
The docker-compose files you create live in the project directory, so anyone who clones the project has access to them.
Create a README file with clear instructions for building, testing, and deploying the project in both the dev and prod environments.
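A minimal sketch of such a compose file, assuming the Node.js project has a Dockerfile in its root and using placeholder credentials:

```yaml
services:
  api:
    build: .                        # builds the Node.js project's Dockerfile
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: "postgres://app:secret@db:5432/appdb"   # placeholder credentials
    depends_on:
      - db
    networks:
      - app-net
  db:
    image: postgres:14
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret     # placeholder
      POSTGRES_DB: appdb
    networks:
      - app-net

networks:
  app-net:                          # the shared network both services join
    driver: bridge
```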
I don't know whether this will help or not, because your question was not very specific, but I hope it does.
I need to decouple my single monolithic application into "microservices", where each module is a combination of an application server and a database.
I am wondering, out of the following:
Vagrant
OpenVZ
Docker (not my preferred choice, as it does not support data persistence)
which one is used on production servers?
TL;DR: Docker and rkt are the enterprise choices; Docker has a much wider community, more attention, and broader real-world implementation.
First of all, Docker does support data persistence. You can easily achieve this via volumes, and lots of drivers are available for different storage backends.
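For example, a named volume keeps PostgreSQL data on the host across container restarts and re-creations (the image tag and password are placeholders):

```yaml
services:
  db:
    image: postgres:14
    environment:
      POSTGRES_PASSWORD: secret               # placeholder
    volumes:
      - pgdata:/var/lib/postgresql/data       # data survives container removal

volumes:
  pgdata:
```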
Docker Philosophy: Microservices.
Google started using containers in the 2000s, and lots of enterprises run containers under heavy load today. Docker is one of the best implementations there is. So definitely Docker, depending on your needs and environment.
Vagrant is for development environments. You can even run Docker inside a Vagrant VM, or skip Vagrant and run Docker locally.
OpenVZ focuses on setting up VPS containers that you configure yourself manually. It provides templates for empty Linux machines that you can start up and shut down and then SSH into, to set them up with whatever you need, like a LAMP stack.
OpenVZ vs Docker: OpenVZ sees a container as a VPS, while Docker sees a container as an application/service. So definitely Docker for microservices.
rkt: functionally, rkt is similar to Docker; however, along with Docker images, rkt can also download and run App Container Images (ACIs). Besides supporting ACIs, rkt has a substantially different architecture, designed with composability and security in mind.
rkt has no centralized "init" daemon, instead launching containers directly from client commands, making it compatible with init systems such as systemd, upstart, and others.
rkt uses standard Unix group permissions to allow privilege separation between different operations. Once the rkt data directory is correctly set up, container image downloads and signature verification can run as a non-privileged user.
I'm building an application that people will run on their own server, similar to Moodle or WordPress. I'm assuming the people running the application will be familiar with executing commands on the command line, but I can't assume they are familiar with Docker.
What I'm thinking of doing is giving them instructions on how to install Docker and docker-compose. Most installations will be small enough that both the web server and the database can run on the same machine, so they can just put the compose file in a directory and then run docker-compose up -d.
Would this be a good way to distribute the application? Of course, the docker-compose file would take into account all the considerations for running docker-compose in production.
You have two tasks:
1. Install Docker on the server
You can use something like Ansible or just write a good manual page for them.
2. Run containers, build the application, etc.
It is very easy to create a Makefile with basic commands (a minimal sketch follows the list):
make install
make reinstall
make build
make start
make stop
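A minimal sketch of such a Makefile, just wrapping docker-compose (the targets and commands are illustrative; recipe lines must be indented with tabs):

```makefile
# Each target wraps a docker-compose command.
install:
	docker-compose pull
	docker-compose up -d

reinstall: stop install

build:
	docker-compose build

start:
	docker-compose up -d

stop:
	docker-compose down
```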
If you use Ansible for step 1, you can use it for both steps 1 and 2.
If you don't need to automate step 1, a Makefile is enough. It is simple and fast, and your users can read the Makefile to understand what it does.
I think: why not? If your end users are OK with using Docker, I think that's a good way to do it.
It lets your end users stop worrying about version and hardware differences, and you are able to push new versions of your containers, so you can roll out updates easily.
I need to set up infrastructure for a new project. Previously I have used standalone Puppet with Jenkins, but now I'm thinking about incorporating Docker builds, so that I could promote from dev to staging to production without triggering a new build, simply by fetching existing Docker images that have already been built.
The app:
Java web app with a REST API, backed by PostgreSQL, Neo4j, and Elasticsearch
Client-side app written in Angular that talks to the Java app through the REST API
Code stored in git repositories
Envs:
Dev server (building, dev + test environments) - 32 GB Linux machine
Test server (AWS)
Production (AWS)
Setup:
So basically I was thinking of something like this:
Separate Docker images for the Java + client-side app, PostgreSQL, Elasticsearch, and Neo4j that talk to each other and store their data on the host through Docker volumes, or by using Docker data containers (I have not decided on the approach yet)
Jenkins building all the code and creating Docker images that would be pushed to a private internal repository
Integration tests run with the Puppet docker module on the DEV server
Push to production with Jenkins via Puppet, using Docker
Why should I use Docker?
Big dev machine - could easily run multiple instances of my app without the need for virtualization (could have an unstable dev, stable dev, SIT, etc.)
Ease of deployment (use Docker and the Puppet docker module) and rollback (simply retrieve the previous version from the Docker repository)
Quick migration and ability to spawn new instances
Preparation for easy scaling of different parts of the system (eg. clustering elasticsearch)
Questions
Does this look reasonable?
I'm thinking about using this Puppet module: https://github.com/garethr/garethr-docker. How would I update my environments with it? Would I have to somehow stop the Docker container, do a docker rm, and then a docker run?
We're using Liquibase for database update management. I guess this should be handled separately from Docker for updates/rollbacks?
Any suggestions welcome, thank you.
You're building a container-orchestrated PaaS. My advice is to look at similar systems for best practices that might be worth emulating.
The first place to start is the 12-factor app site, written by one of the cofounders of Heroku. The site is incredibly useful, describing some of the desirable operational features of a modern cloud-scale application. The next stop would be Heroku itself, to get an idea of what a "modern" development and deployment environment can look like.
I'd also recommend looking at some of the emerging open-source PaaS platforms. Large vendor-supported systems like Cloud Foundry and OpenShift are all the rage at the moment, but simpler solutions (built on Docker) are also emerging. One of these, Deis, uses a related technology, Chef, so it might give some insight into how Puppet could be used to manage your runtime Docker containers. (Modern Deis no longer uses Chef.)
Answers:
Yes, this is quite reasonable.
Instead of managing "environments", do as Heroku does and just create a new application for each version of your application. This is the "Build, Release, Run" pattern. In your case, Jenkins is triggered by new code and creates the Docker images, which are saved to a repository and used to deploy instances of your application release (see the sketch at the end of this answer).
A database would be an example of a "backing service" that you connect to your application at application creation time. An upgrade would amount to stopping one application version and starting another, both connected to the same database.
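As an illustration of that build-once, deploy-many flow (the registry URL, image name, and tags are placeholders), the CI server builds and pushes each release, and every environment only pulls:

```bash
# On the Jenkins build server: build once, tag with the release version, push
docker build -t registry.internal.example/webapp:1.7.0 .
docker push registry.internal.example/webapp:1.7.0

# On test or production: no rebuild, just fetch and run the released image
docker pull registry.internal.example/webapp:1.7.0
docker rm -f webapp 2>/dev/null || true          # remove the previous release, if any
docker run -d --name webapp registry.internal.example/webapp:1.7.0

# Rollback: remove the current container and run the previous tag instead
docker rm -f webapp
docker run -d --name webapp registry.internal.example/webapp:1.6.3
```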