Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
I was going through Puppet and comparing it with Docker.
I have learned that Puppet is used for configuration management of scalable infrastructure: new VMs can easily be set up with the same configuration, and so on.
It seems that Docker is also capable of all of this, though in a different way.
Is Docker replacing configuration management tools like Puppet, Chef, etc.?
Please help me to understand.
Thanks in advance.
I'm unsure whether this question belongs here or not, but nevertheless, here is some source material that probably explains it better than I can: http://cloudify.co/2014/10/30/Docker-cloud-orchestration-configuration-management.html
Docker operates in a different manner than Chef or Puppet. Docker is (with limited exceptions) a static system, while Chef et al. are dynamic in nature. If you want to change a fleet of Docker-provisioned services, you build a new Docker image, push it out, and blow away your old containers.
Chef et al. instead check frequently for state changes, and when changes occur they pull them down and converge. This leaves room for having parts of the server automated and others not (for a portion that is difficult to manage, for instance, or for emergency repairs).
Of the two, Docker is the stronger model in my opinion, but even then you should have some well-defined configuration management to create your Docker images, such as serverless Chef, Ansible, or similar.
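To make the rebuild-and-replace cycle concrete, here is a minimal sketch using the Docker SDK for Python (the SDK choice, the myapp image tag and the container name are my own placeholder assumptions, not anything the answer prescribes):

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

# Build a fresh image from the current Dockerfile: the new, static artifact.
image, _ = client.images.build(path=".", tag="myapp:2.0")

# Replace the running container instead of mutating it in place.
try:
    old = client.containers.get("myapp")
    old.stop()
    old.remove()
except docker.errors.NotFound:
    pass  # nothing to blow away on the first deploy

client.containers.run("myapp:2.0", name="myapp", detach=True)
```

A convergent tool like Chef or Puppet would instead log in to the existing machine and edit it towards the desired state; with Docker the old container is simply discarded.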
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed yesterday.
One of the web tools we intend to use requires Docker for installation. Due to limited resources, however, the only way for us to deploy this tool is on a shared university PHP webserver with an associated MySQL database. My question is: can you somehow convert or even "compile" this Docker-dependent tool into a simple package, similar, for instance, to WordPress? As I understand it, WordPress development does use Docker, while the final WordPress installation package does not.
Is this kind of Docker removal possible, and is there a standardised workflow for it? The tool in question is located in the following repository.
I have tried to install the tool as is, but was blocked by the lack of admin privileges and the absence of Docker on the university webserver described above. I have experience setting up WordPress, and I would expect the tool of interest, without Docker, to have a more involved installation process (compared to the current 3 steps), for instance also requiring a manual connection to an SQL database.
Please excuse my limited understanding and layman's terms; I sadly do not come from a computer science background.
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 1 year ago.
What is the advantage of using Docker on a local machine for running an app?
And what is the difference compared to not using Docker?
Reproducibility. No more "works on my machine".
Furthermore, we can deploy all our dependencies (relational database, document database, graph database, messaging system, ...) through Docker (e.g. through a docker-compose file), which eases development.
Another advantage is that, in case we deploy to a container-based environment, we can use the exact same images used in production and thus improve dev-prod parity.
There are a lot of advantages:
You can easily install several versions of different software without any collisions (e.g. 10 versions of MongoDB); see the sketch after this list.
As the previous answer said, it creates an isolated environment similar to your production one (the only difference being the actual amount of resources, such as CPU/GPU/RAM, etc.).
Easy setup for new developers (no need to manually install each separate tool and resolve installation/configuration issues).
Ability to quickly deploy test environments, new servers, or this app on your brand-new laptop.
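As an illustration of the no-collision point above, here is a minimal sketch using the Docker SDK for Python (my own choice of tooling; the plain docker CLI works just as well) that runs two MongoDB versions side by side, differing only in the published host port:

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

# Each MongoDB version lives in its own container, so nothing collides on the host;
# only the published host port has to differ.
for version, host_port in [("5.0", 27017), ("7.0", 27018)]:
    client.containers.run(
        f"mongo:{version}",
        name=f"mongo-{version}",        # hypothetical container names
        ports={"27017/tcp": host_port},
        detach=True,
    )
```

The same pattern covers the "easy setup for new developers" point: a new teammate runs one script instead of installing and configuring each database by hand.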
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 1 year ago.
I've been asked by my client to start with Docker-based performance and load testing.
They also have multiple Docker nodes for the application.
They are expecting me to run a load test on the dockerized application and share the results.
However, I have no idea where to start with this.
I've also searched the BlazeMeter community (https://www.blazemeter.com/blog/performance-testing-with-docker) about this, but I'm looking for some guidance on getting started with Docker load testing.
What I presently have is the Docker setup:
I would also like suggestions on which parameters we need to measure when it comes to performance testing with Docker.
You don't need Docker (or any other virtualization solution) for load testing; containers don't add any value there, they just consume resources.
Moreover, JMeter doesn't know anything about the architecture of the system under test: whether it's dockerized, a microservice or a monolith, written in this or that programming language, etc. JMeter acts at the protocol level, sending a request to the application, waiting for the response, and measuring the response time.
So you don't need to "dockerize" JMeter in order to load test the dockerized application; you can run it on one machine (if it's powerful enough) or go for distributed testing.
However, if you need auto-scaling of JMeter slaves, you will need to consider a container orchestration solution like Docker Swarm or Kubernetes, but that is a broader topic; from JMeter's perspective there is no difference whether the master and slaves run on bare metal, a virtual machine, or a container.
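On the question of which Docker-specific parameters to measure: besides the usual response times and error rates from JMeter, it is worth capturing per-container CPU and memory usage while the test runs. A minimal sketch with the Docker SDK for Python (my own choice of tooling; docker stats on the CLI reports the same numbers), where app is a hypothetical container name:

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

# One-shot snapshot of the application container's resource usage during the test.
container = client.containers.get("app")      # hypothetical container name
stats = container.stats(stream=False)

cpu_ns = stats["cpu_stats"]["cpu_usage"]["total_usage"]   # cumulative CPU time, ns
mem_used = stats["memory_stats"]["usage"]                 # bytes
mem_limit = stats["memory_stats"]["limit"]                # bytes
print(f"CPU: {cpu_ns} ns, memory: {mem_used}/{mem_limit} bytes")
```

Sampling this periodically during the JMeter run shows whether the container, rather than the application code, is the bottleneck.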
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
What is the advantage of storing a docker image somewhere when I could instead give my team my Dockerfile and have them each build the image locally?
Edit: No, I won't reword my question. It's not opinion-based. I didn't ask which is better; I only asked for the justification of storing images. Thankfully I got good answers before it got closed.
Some reasons:
Building the Dockerfile could be slow or computationally expensive, so building only once is beneficial.
Building the Dockerfile may require specific files/components that only exist on machine A or with person P, so giving the Dockerfile to someone else to build just isn't possible without those sources. Or the Dockerfile may contain data not meant to be shown to others (raw passwords passed as inputs to some commands, for example).
It ensures no one messes around with the Dockerfile contents, and thus enforces "repeatability."
Ease of use. Sometimes as a customer/user you just want to run the image, as opposed to figuring out how to build the thing and then run it (thanks to @DazWilkin's comment).
Building a Docker image can take a while. It can also fail for network reasons (as can pulling a Docker image from a repository).
But one of the key advantages of using an existing image is the same as for any cache of stable, unchanging data: if you don't want to take any changes, why rebuild it? I use Docker images for my build environments; they are something I only want to rebuild when I upgrade the build environment, and there is a performance cost to doing that.
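A minimal sketch of the build-once, pull-many workflow with the Docker SDK for Python (the registry name and tag are placeholders I made up; the docker build/push/pull CLI commands do the same thing):

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

# Build once, on the machine that has the Dockerfile and its full build context...
image, _ = client.images.build(path=".", tag="registry.example.com/team/app:1.0")
client.images.push("registry.example.com/team/app", tag="1.0")

# ...then teammates and CI pull the finished artifact instead of rebuilding it.
client.images.pull("registry.example.com/team/app", tag="1.0")
```

Everyone then runs the identical image, which is exactly the repeatability point made above.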
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
I currently have a private server which runs Docker. Is it okay to have multiple containers running (for example 3 different websites, a ZNC server, and some Node.js projects that I have containerized), or should I run those containers one per Docker host?
As always, it depends on your needs. It seems that you are hosting some private projects conveniently in Docker containers. It's perfectly fine to run them on a single host, and as long as you don't encounter any performance problems I would actually encourage you to stick with it, because splitting them up means more administrative tasks; maybe you can use that saved time somewhere else. Don't get me wrong: if you want to dive deeper into container orchestration with, for example, Kubernetes, you should do it, because that's the next logical step towards production-grade hosting with techniques many successful companies use.
Security Concerns
Filesystem, process, and memory isolation are core features of Docker. But there can be very rare cases, e.g. the Meltdown and Spectre vulnerabilities, where one container is able to read data from an adjacent one on the same host.
So if you want to be completely sure and extremely high data security is your goal, you would need to deploy your containers on different virtual machines, one per instance.
Performance
If a container does nothing, it won't consume much RAM/CPU/disk I/O at all. I have seen places running up to a hundred containers on a single host, so it really depends on your hardware and the applications you run.
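If you do keep everything on a single host, capping each container's resources stops one busy project from starving the others. A minimal sketch with the Docker SDK for Python (an assumption of mine; the same limits exist as docker run --memory and --cpus flags), using a placeholder image and name:

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

# Cap this site at 256 MiB of RAM and half a CPU so its neighbours stay responsive.
client.containers.run(
    "nginx:alpine",               # placeholder image for one of the hosted websites
    name="site-one",              # hypothetical container name
    detach=True,
    mem_limit="256m",
    nano_cpus=500_000_000,        # nano_cpus is in units of 1e-9 CPUs, so this is 0.5 CPU
)
```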