What is the advantage of using Docker on a local machine for running an app? And what is the difference compared to running it without Docker?
Reproducibility. No more "works on my machine".
Furthermore, we can deploy all our dependencies (relational database, document database, graph database, messaging system, ...) through Docker (e.g. through a docker-compose file), which eases development.
Another advantage is that, in case we deploy to a container-based environment, we can use the exact same images in development that are used in production and thus improve dev-prod parity.
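To make the docker-compose point concrete, here is a minimal docker-compose.yml sketch; the service names, image tags, port, and password are placeholders rather than anything from a specific project:

    services:
      app:
        build: .                 # the application itself, built from the local Dockerfile
        ports:
          - "3000:3000"
        depends_on:
          - db
          - queue
      db:
        image: postgres:16       # relational-database dependency
        environment:
          POSTGRES_PASSWORD: example
      queue:
        image: rabbitmq:3        # messaging-system dependency

A single docker compose up then starts the app together with all of its dependencies, with nothing installed on the developer's machine beyond Docker itself.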
There are a lot of advantages:
You can easily install several versions of the same software without any collisions (e.g. 10 versions of MongoDB; see the commands after this list).
As the previous answer said, it creates an isolated environment similar to your production one (the only difference is the actual amount of resources, such as CPU/GPU/RAM/etc.).
Easy setup for new developers (no need to manually install each separate tool and resolve issues with installation/configuration/etc.).
Ability to quickly deploy test environments, new servers, or the same app on your brand new laptop.
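As a sketch of the first point, two MongoDB versions can run side by side because each is isolated in its own container; the container names and host ports below are arbitrary:

    docker run -d --name mongo5 -p 27017:27017 mongo:5.0   # one version on the default port
    docker run -d --name mongo6 -p 27018:27017 mongo:6.0   # another version mapped to a different host port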
One of the web tools we intend to use requires Docker for installation. Due to limitations in resources, however, the only way for us to deploy this tool is on a shared university PHP webserver with an associated MySQL database. My question is: can you somehow convert or even "compile" this Docker-dependent tool to get some simple package, similar, for instance, to Wordpress? Indeed, as per my understanding, Wordpress development does require Docker, while the final package for Wordpress installation does not.
Is this operation of docker-removal possible and is there a standardised workflow? The tool in question is located in the following repository.
I have tried to install the tool as is, but was blocked by the lack of admin privileges and the absence of Docker on the described university webserver. I have experience in setting up Wordpress, and I would expect my tool of interest to have a more involved installation process (compared to the current 3 steps) without Docker, for instance also requiring a manual connection to an SQL database.
Please excuse my limited understanding and layman's terms; I sadly do not come from a computer science background.
I've been trying to get my head around the purpose of Docker for a couple of days now.
I found this answer very useful.
However, it raised more questions.
If I am deploying my app to Heroku, Azure, Rackspace, Netlify, etc — then should I consider Docker? If I'm maintaining my own package updates, making sure nothing breaks at the next deployment and that my dependencies are all in check, then what purpose would I have for Dockerizing my app?
Moreover, in the case of Heroku for instance, they give you the option to deploy with Docker. This makes no sense to me, and here's why.
Consider that I have a typical React app with a Node backend and MongoDB database. I can run create-react-app to spin up my frontend and code out at most 15 lines in a single server.js file to have my full stack app up and running in 3 minutes. What is the purpose of all of these images available on Dockerhub? Why would I need a Node image? Why would I need a MongoDB image, a React image? Why complicate everything that a package-lock.json file would keep simple? Perhaps my understanding of "image" is incorrect?
How is adding arbitrary images of dependencies that are used by my project in any way useful? If I'm using MongoDB in my project, I'll need to add the npm package, add mongoose as well, and code out 5 lines of code to connect to my db on Atlas — where I'd have to do additional work, like building a cluster anyway. Why on earth would I ever need a docker image of MongoDB, if it'd be much easier to just npm install it and keep a working, stable version of it in package-lock.json?
By that logic, heck, why not just throw my whole dependencies object out of package.json altogether and use only docker images?
I'd appreciate some clarity on this. I understand the whole "quick new-hire deployment" aspect from the link I posted above, but Docker honestly just sounds and seems like an unnecessary overcomplication to me.
I wanted to get a bit of advice from the StackOverflow community on best practices/guidelines when inheriting a Rails app from another developer.
I am currently in the process of assuming control of development at my place of work. I have decent experience in front-end, SQL/Mongo, and Node.js, and a good amount of knowledge of Ruby. However, I do not have very much experience with Rails, per se.
The previous developer is being fairly unhelpful in providing dependencies and software versions of the various packages in use by the app. However, I have been able to get the following information and I have installed these dependencies (although they may differ from the versions needed by the app):
Postgres
Heroku CLI
AWS CLI
Redis
Sidekiq
AngularJS
Would any of you guys be able to briefly delineate the next steps of getting a previously existing app running (or point me to another source)?
Any help you all can provide is much appreciated. Thank you!
Things you need to retain:
Access credentials to all production servers and services in use (including the domain name and backup servers, if there are any). It does not have to be you personally, but someone at the company should have them (there may be some security/privacy-related issues).
Access to source code
A fresh production backup (if possible)
Most of the versions can be inferred from the production system once you have full access.
Some others (like Sidekiq) are pinned in the Gemfile.lock and yarn.lock files.
Then try to bring the system up from the backup; if you succeed, you'll know that everything is OK.
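A rough sketch of that process, assuming a fairly standard Rails + Postgres app; the database name and dump file name are placeholders, and the exact commands depend on how the app is configured:

    ruby -v && cat .ruby-version          # compare the installed Ruby with the version the app pins (if any)
    bundle install                        # install gems at the versions recorded in Gemfile.lock
    yarn install                          # install JS packages recorded in yarn.lock
    createdb myapp_development            # placeholder database name
    pg_restore -d myapp_development production.dump   # load the fresh production backup (placeholder file name)
    bin/rails db:migrate                  # apply any migrations newer than the backup
    bin/rails server                      # verify that the app boots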
I currently have a private server which runs Docker. Is it okay to have multiple containers running on it (for example 3 different websites, a ZNC server, and some Node.js projects that I have containerized), or should I run each of those containers on its own Docker host?
As always, it depends on your needs. It seems that you are hosting some private projects conveniently in Docker containers. It's perfectly fine to run them on a single host, and as long as you don't encounter any performance problems I would actually encourage you to stick with that, because splitting them up means more administrative work. Maybe you can use that saved time somewhere else. Don't get me wrong: if you want to dive deeper into container orchestration with, for example, Kubernetes, you should actually do it, because that's the next logical step towards production-grade hosting with techniques many successful companies use.
Security Concerns
File system, process, and memory isolation are core features of Docker. But there can be very rare cases, e.g. the Meltdown and Spectre vulnerabilities, where one container is able to read data from an adjacent one on the same host.
So if you want to be completely sure and extremely high data security is your goal, you would need to deploy your containers on different virtual machines, one container per VM.
Performance
If a container does nothing, it won't consume much RAM/CPU/disk I/O at all. I have seen setups running up to a hundred containers on a single host. So it really depends on your hardware and on the applications you are running.
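If one container ever starts to hog the shared host, Docker can also cap resources per container; a small sketch, where the image name and limits are placeholders:

    docker run -d --name site1 --memory 256m --cpus 0.5 my-site-image   # hypothetical image, capped at 256 MB RAM and half a CPU
    docker stats --no-stream                                            # one-off snapshot of per-container CPU and memory usage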
I was going through Puppet and comparing it with Docker.
I came to know that Puppet is used for configuration management of scalable infrastructure: new VMs can easily be set up with the same configuration, and so on.
It seems that Docker is also capable of all this, though in a different way.
Is docker replacing the configuration management tools like puppet, chef etc?
Please help me to understand.
Thanks in advance.
Unsure if this question belongs here or not, but nevertheless, here is some source material that probably explains it better than me: http://cloudify.co/2014/10/30/Docker-cloud-orchestration-configuration-management.html
Docker operates in a different manner than Chef or Puppet. Docker is (with limited exceptions) a static system, while Chef et al. are dynamic in nature. If you seek to change a fleet of Docker-provisioned services, you build a new Docker image, push it out, and blow away the old containers.
Chef et al. instead check frequently for state changes, and when they occur they pull those changes down and converge. This leaves room for having parts of the server automated and others not (if a portion is difficult to manage, for instance, or for emergency repairs).
Of the two, Docker is the stronger model in my opinion, but even then you should have some well-defined CM to create your Docker images, such as serverless Chef, Ansible, or another tool.
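To make that "static" model concrete, a minimal sketch of the rebuild-and-replace workflow; the base image, paths, and tags are placeholders:

    FROM nginx:1.25                      # placeholder base image
    COPY site/ /usr/share/nginx/html/    # content is baked into the image at build time

    # Any change means building and rolling out a new image rather than converging
    # a running server, for example:
    #   docker build -t my-site:v2 .
    #   docker rm -f my-site && docker run -d --name my-site -p 80:80 my-site:v2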