I've found that the Windows file patching mechanism MSDelta creates much smaller patches than Linux alternatives like xDelta3 and bsdiff. This functionality is crucial to an application we are developing.
Is there any way to run MSDelta in a Docker container?
I have come across the term IaC many times while learning DevOps. When I googled it, I learned that it is the practice of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. So is Docker also an infrastructure-as-code technology, given that it virtualizes an OS to handle multiple workloads on a single OS instance? Thanks in advance
I'm not sure exactly what you are asking, but Docker provides infrastructure as code because Docker's functionality is defined via Dockerfiles and shell scripts. You don't install a list of programs manually when defining an image, and you don't configure anything with a GUI in order to create an environment when you pull an image from Docker Hub or deploy your own image.
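For instance, here's a minimal sketch of a Dockerfile (the app and package names are illustrative, not anything from your question):

```dockerfile
# The whole environment is declared as code: base OS, packages, app, command.
FROM ubuntu:22.04

# Packages are listed here instead of being installed by hand.
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 curl \
    && rm -rf /var/lib/apt/lists/*

# Configuration that might otherwise be done through a GUI or an interactive
# session lives in version control alongside the application.
COPY app.py /opt/app/app.py
CMD ["python3", "/opt/app/app.py"]
```

Running `docker build -t myapp .` on any machine reproduces the same environment, which is the essence of infrastructure as code.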
And as said in another answer, Docker is not virtualization: everything actually runs on your host's Linux kernel, just with limited resources in its own namespace. You can see a container's processes via htop on the host machine, for instance. There's no hypervisor and essentially no virtualization overhead.
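You can check this for yourself with a quick sketch (nginx is just an arbitrary example image):

```sh
# Start a container in the background.
docker run -d --name demo nginx

# From the host, the container's nginx processes appear as ordinary
# entries in the host's process table; there is no VM hiding them.
ps aux | grep [n]ginx
```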
I think you have misunderstood the concept: Docker is not a hypervisor, and containers are not VMs.
From this page: https://www.docker.com/resources/what-container
A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.
Container images become containers at runtime; in the case of Docker containers, images become containers when they run on Docker Engine.
Containers are an abstraction at the app layer that packages code and dependencies together. Multiple containers can run on the same machine and share the OS kernel with other containers, each running as isolated processes in user space.
Does Chef's new Habitat tool somehow work with Docker? If so, what problem is Habitat trying to solve or is it just trying to replace tools in the Docker toolset (e.g., Docker Swarm, Docker Machine, Docker Compose, etc.)?
This is skirting the limits of StackOverflow's policy on open-ended questions, but I'll answer anyway:
Docker and Habitat don't really overlap much. The main point of competition is on building release artifacts. Docker has Dockerfiles and docker build; Habitat has plans and the Studio. The output of both can be a Docker image, though, which is basically a tarball of a filesystem along with some metadata. Habitat is aimed more at building super-minimal artifacts, i.e. not including a Linux distro of any kind, no package manager, just statically compiled executable code and whatever support files you need for that specific app.
As for runtime, they are 100% orthogonal. Docker is a way to run a process inside a bunch of Linux security features collectively called a "container" now. Habitat is a little stub that surrounds your process and handles things like runtime config distribution, secrets transfer, and service discovery. Those features overlap more with higher-level tools like Kube, but even there only barely. You need something to actually start hab-sup, which could be docker run (possibly via Swarm), Nomad, Kube, or even a non-container system like Upstart or Runit if you wanted. The only interaction point is that those tools all start an entrypoint process, and hab-sup is a generic entrypoint process that gives whatever app runs underneath it some cool features if it wants to use 'em.
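To make the division of labor concrete, a rough sketch of the typical flow (the origin and package names are hypothetical, and the commands reflect the Habitat tooling of that era):

```sh
# Build the package in the Habitat Studio (a clean build environment);
# `build` runs the plan.sh in the current directory.
hab studio enter
build

# Export the resulting artifact as a Docker image whose entrypoint is the
# Habitat supervisor (hab-sup) wrapping your app.
hab pkg export docker myorigin/myapp

# Then run it like any other container; hab-sup supervises the service inside.
docker run -it myorigin/myapp
```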
I am trying to understand how Docker can be used to dockerize a multi-layered application.
My Tomcat application needs MongoDB, MySQL, Redis, Solr, and RabbitMQ. I have been playing with Docker for a couple of weeks now. I am able to install and use mongo/mysql containers, but I am not seeing how I can ship the complete application using Docker. I have a few questions.
How should the images be structured? Should I have one image with all the components installed, or separate images (like one for Tomcat, one for Mongo, one for MySQL, etc.) whose containers are started by a bash script outside of Docker?
What is the Docker way of managing multiple containers at once? That is, say I have multiple containers (mongo, mysql, tomcat, etc.) that need to work together to run my application; is there any built-in way of handling this so that one command/script does it all?
Suppose I dockerize my application; how can I manage various routine tasks that need to be performed, like incremental code deployment, database patches, etc.? Currently we are using Vagrant, and we also use Fabric along with Vagrant for various tasks. After vagrant up, we use fab tasks for all kinds of routine things like code deployment, db refresh, adding volumes, and starting/stopping services. What would be Docker's way of doing this?
With Vagrant, if a VM crashes due to high CPU usage, etc., the host system is not affected. But I see Docker eating up a lot of host resources. Can we put limits on that, say no more than one CPU core for a given container?
Because we use Vagrant, most of the questions above are in that context. When I started with Docker, I thought of Docker as a kind of virtualization technology that could replace our huge Vagrant-based infra. Please correct me if I am wrong.
I advise you to look at docker-compose:
you'll be able to define the architecture of your application
you can then easily build it and run it with one command (see the sketch below)
pretty much the same setup for dev and prod
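As a rough sketch of what that looks like for a stack like yours (the image tags and the placeholder password are illustrative, not recommendations):

```yaml
version: "2"
services:
  app:
    build: .              # your Tomcat application image
    ports:
      - "8080:8080"
    depends_on: [mongo, mysql, redis, solr, rabbitmq]
  mongo:
    image: mongo:3
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder secret, not for production
  redis:
    image: redis:3
  solr:
    image: solr:6
  rabbitmq:
    image: rabbitmq:3-management
```

A single `docker-compose up -d` then builds and starts the whole stack, which is the built-in way (your question 2) of running multiple cooperating containers with one command.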
For microservices, composition, etc., I won't repeat what has already been said elsewhere.
For container resource allocation:
docker run has various resource-control options (using Linux cgroups); see my gist here:
https://gist.github.com/afolarin/15d12a476e40c173bf5f
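To your CPU question specifically, a quick sketch of the relevant docker run flags (the image name and values are illustrative):

```sh
# Pin the container to a single core and cap its memory.
docker run --cpuset-cpus="0" -m 512m myimage

# Or give it a relative share of CPU time instead of a hard pin.
docker run --cpu-shares=512 myimage
```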
I use a Mac for development and deployment, and I need to create an isolated environment. I've been exploring Vagrant and Docker, and it seems that in order to run Docker I need to be on a Linux environment. I'm running an instance of Vagrant with Ubuntu, the same as my partner uses on their desktop.
My question is: can my partner run the Docker container on their Ubuntu instance instead of having to set up Vagrant like me? Do my server and app run inside my Docker container? (I'm using MEAN.)
Trying to build a workflow and piece it all together.
They could probably get Docker to run, but packaging it all inside of a Vagrant VM really is the way to go, as that will keep it transportable across the board.
You can skip the Vagrantfile and just share the Docker images. There should be no detectable host differences from within the container.
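For example, one way to hand an image over directly (the image name and port are hypothetical for a MEAN app):

```sh
# On your Mac (inside the Vagrant/Docker environment): export the image.
docker save -o myapp.tar myapp:latest

# On your partner's Ubuntu machine: load it and run it natively.
docker load -i myapp.tar
docker run -d -p 3000:3000 myapp:latest
```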
I am totally new to Docker and have only, so far, used the images available on Docker repos.
I have already tested and been using Docker for some aspects of my daily activities, and it works great for me, but in some specific cases I need a "virtual" Linux image with graphics support (X in Ubuntu or CentOS), and so far the images I have encountered on Docker repos don't have X support by default.
Should I in this case use a standard VirtualBox or VMware image? Or is it possible to run a visual version of Linux in a Docker container? I haven't tried it yet.
If you run your containers in privileged mode, they can access the host's resources (and just about anything else, for that matter), so in essence it is possible. But I'd be willing to bet it turns out to be more trouble than it's worth, because such containers won't be as portable as ones that don't require outside resources.
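If you do want to try it, the usual trick is to share the host's X socket into the container rather than using privileged mode. A minimal sketch, assuming an X server is running on the host and using xeyes purely as a demo app:

```sh
# Loosen X access control for local connections (a security trade-off).
xhost +local:

# Mount the X socket and pass the display into the container.
docker run -it \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    ubuntu:22.04 bash

# Inside the container:
#   apt-get update && apt-get install -y x11-apps && xeyes
```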