Docker Registry vs Nexus/Artifactory - docker

Is the 'Docker registry' in Docker engine similar to Nexus/Artifactory? What are the similarities and differences between them? If we already have Nexus, can we use it as an alternative to Docker registry and plug it into the Docker engine?
Could someone help me clear this up?

A Docker registry is a repository for managing Docker images. The registry is a service of its own and not part of the Docker engine.
The registry serves a similar purpose to repository managers like Artifactory or Nexus, with one big difference: a repository manager can usually manage repositories for many types of technologies, for example Maven, NPM, Ruby Gems, CocoaPods, Git LFS, Python Eggs and others. A pure Docker registry will only manage Docker images.
There are a couple of things you should take into consideration when choosing a tool for managing your Docker registry:
Performance - Docker images can be big. In a CI/CD environment that generates large numbers of Docker images a day, you need a tool that will be able to deal with the load and scale as you grow. Some tools offer a clustered (HA) version which allows spreading the load between multiple nodes.
Storage management - Docker images consume a lot of storage space. It is better to choose a tool which manages the required storage efficiently:
Supports deduplication of image layers between images and repositories
Efficiently cleans up unused image layers (garbage collection). Notice that some tools offer a stop-the-world GC mechanism, which hurts performance.
Offers cleanup procedures/mechanisms which allow deleting images based on age, usage, etc.
Supports multiple storage backends - file system, object storage
Support for multiple registries - some tools limit you to managing a single registry, while others allow managing multiple registries in parallel. This is useful when you need to separate snapshots from production-ready images.
Support for the latest Docker version - the Docker registry API and manifest formats change often. Make sure you choose a tool which supports all the latest changes.
Universal - if you need to manage more than Docker images, which is usually the case since you also use tools like NPM, Bower, Yum and others which also require a registry, choose a universal repository manager which supports such technologies.
Enterprise ready - look for a tool which is enterprise ready, with support for features such as LDAP connectivity, role-based access control, high availability, multi-site development, etc.
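As for the "plug it into the Docker engine" part of the question: a plain Docker registry and repository managers like Nexus or Artifactory expose the same Docker Registry API, so the Docker engine talks to either in the same way. A minimal sketch, assuming a hypothetical registry host nexus.example.com:5000:
docker login nexus.example.com:5000                     # authenticate against the private registry
docker tag myapp:1.0 nexus.example.com:5000/myapp:1.0   # the registry host becomes part of the image name
docker push nexus.example.com:5000/myapp:1.0            # publish the image
docker pull nexus.example.com:5000/myapp:1.0            # any engine that can reach the registry can pull it
If the registry is only served over plain HTTP, the engine also needs it listed under insecure-registries in its daemon configuration.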
Disclaimer: I work for JFrog, the company behind Artifactory.

Related

Which is the better way to install Jenkins: Docker or Kubernetes?

I am new to DevOps and want to install Jenkins. Out of all the installation options provided in the official documentation, which one should I use? I have narrowed it down to Docker or Kubernetes. The parameters I am basing the decision on are below:
Portability - can be installed on any major OS or cloud provider.
Minimal changes to move to production.
Kubernetes is a container orchestrator that may use Docker as its container runtime. So, they are quite different things—essentially, different levels of abstraction.
You could theoretically run an application at both of these abstraction levels. Here's a comparison:
Docker
You can run an application as a Docker container on any machine that has Docker installed (i.e. any OS or cloud provider instance that supports Docker). However, you would need to implement any operations-related features that are relevant for production, such as health checks, replication, load balancing, etc. yourself.
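As a rough sketch (not a full production setup), running Jenkins as a plain Docker container with the official jenkins/jenkins image could look like this; the volume name is just an example:
docker volume create jenkins_home   # keep Jenkins state outside the container
docker run -d --name jenkins -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
Everything beyond that (restarting on failure, replication, load balancing) is up to you.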
Kubernetes
Running an application on Kubernetes requires a Kubernetes cluster. You can run a Kubernetes cluster either on-premises, in the cloud, or use a managed Kubernetes service (such as Amazon EKS, Google GKE, or Azure AKS). The big advantage of Kubernetes is that it provides all the production-relevant features mentioned above (health checks, replication, load balancing, etc.) as part of the platform. So, you don't need to implement them yourself but just use the primitives that Kubernetes provides to you.
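For comparison, a minimal sketch on Kubernetes using imperative kubectl commands; in practice you would more likely use a Deployment manifest or the Jenkins Helm chart:
kubectl create deployment jenkins --image=jenkins/jenkins:lts                            # Kubernetes keeps the desired replicas running
kubectl expose deployment jenkins --port=8080 --target-port=8080 --type=LoadBalancer     # a Service provides basic load balancing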
Regarding your two requirements, Kubernetes provides both of them, while using Docker alone does not provide easy production-readiness (requirement 2). So, if you're opting for production stability, setting up a Kubernetes cluster is certainly worth the effort.

Where do I save docker-compose files of a swarm?

We are running Docker Swarm + Ceph. All runs okay. We plan to move more of our internally developed as well as third party applications to Swarm. Now the issue I have is as follows:
Let's say I deployed 10 third-party applications as Swarm stacks. I do this with the docker stack deploy command, supplying a docker-compose file to it (often itself composed out of multiple docker-compose-*.yml files merged via docker-compose config). Now, if I want to change something in the stack, I often need my initial compose files. Is there any kind of registry for docker-compose deployment descriptors? Like a Docker image registry, but for docker-compose descriptors?
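For reference, the deployment flow looks roughly like this (file and stack names are made up):
docker-compose -f docker-compose.yml -f docker-compose-prod.yml config > stack.yml   # merge the partial compose files
docker stack deploy -c stack.yml myapp                                               # create or update the stack on the swarm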
One of the ideas I have is to maintain a Git or Mercurial repository with some directory structure to version the descriptors. This idea looks interesting, but works (semi-)well only for third-party applications. For our own applications it adds a problem, as we often use CI/CD, and this would mean we need to check out one more repository during deployment, replace the deployment descriptors for our apps and commit them. This may be a little tricky, as it may potentially lead to merge conflicts, etc.
Ideally, the solution I am looking for should provide an easy way to get the latest versions of the deployment descriptors of a particular (deployed) stack, while also keeping their previous versions.
How do you manage your docker-compose files when there are too many of them?

Can I run a Docker container doing an x86 build on an IBM Power system?

Our build setup is baked into a large Docker container (basically a 2 GB image containing a complete x86 Linux in itself).
We have two ways to actually build: the official approach is a Jenkins environment (running on x86 hardware). But we also have a little "side x86 server" running RH 7. Developers can log into that RH server and kick off specific builds (using said Docker images) themselves.
Those RH servers will be shut down at some point, to be replaced with IBM Power8 machines (running RH 7 Little Endian for Power).
I am simply wondering: is there a chance that our existing build setup and Docker images simply work on Power8? Or are there fundamental technical issues that make it unlikely and not even worth trying?
You can probably use your existing build methodology and scripts close to unchanged, but you'll need to rebuild the actual images.
You can't directly run x86 binaries on Power (at a very low level, the bytes of machine code are just different). Docker doesn't contain any sort of virtualization layer; it does a bunch of setup to isolate the container from the host, but then runs the binaries in an image directly.
If your Jenkins setup has enough parameters for image names and version tags, then you should be able to run the x86 and Power setups side-by-side; you need to encode the architecture somewhere in the built image name or tag; for instance, repo.example.com/app/build:20180904-power. (I don't know that one or the other is considered better if you control all of the machinery.) If you have a private repo, you could encode it earlier in the path, winding up with image names like repo.example.com/power/build:20180904.
You'd need to double-check that everywhere that has a Docker image reference has it correctly parameterized (which is a good practice anyways). That would include any direct docker run commands; any Docker Compose or Kubernetes YAML files or similar artifacts; and the FROM line of any Dockerfiles.
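As a sketch of that parameterization (the date tag and registry name reuse the example above; a uname-based suffix is just one way to do it), each build host could tag and push its own image, and you can verify the platform afterwards:
ARCH=$(uname -m)   # x86_64 on the existing servers, ppc64le on Power8
docker build -t repo.example.com/app/build:20180904-$ARCH .
docker push repo.example.com/app/build:20180904-$ARCH
docker image inspect --format '{{.Os}}/{{.Architecture}}' repo.example.com/app/build:20180904-$ARCH   # confirm what the image was built for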
Existing build setup? Not sure!
Docker images? NO, don’t even try.
Docker images are actually multiple layers which are stored on the filesystem through the corresponding storage driver and backing filesystem (shown in the output of docker info).
If the storage driver/backing filesystem changes, which is likely to be true when the OS changes, older Docker images may no longer be valid, meaning they must be rebuilt for sure.
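You can check the storage driver and backing filesystem mentioned above on both the old and the new machines, for example:
docker info --format '{{.Driver}}'   # storage driver, e.g. overlay2 or devicemapper
docker info                          # the full output also lists the Backing Filesystem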

Docker for non-code deployments?

I am trying to help a sysadmin group reduce server and service downtime on the projects they manage. Their biggest issue is that they have to take down a service, install an upgrade or reconfigure it, and then restart it and hope it works.
I have heard that Docker is a solution to this problem, but usually from developer circles in the context of deploying their Node/Python/Ruby/C#/Java, etc. applications to production.
The group I am trying to help is using vendor software that requires a lot of configuration and management. Can Docker still be used in this case? Can we install any random software on a container? Then keep that in a private repository, upgrade versions, etc.?
This is a Windows environment, if that makes any difference.
Docker excels at stateless applications. You can use it for applications with persistent data too, but that requires the use of volumes.
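A minimal sketch of what "the use of volumes" means here (the names are placeholders):
docker volume create vendorapp-data                               # a named volume survives container replacement
docker run -d -v vendorapp-data:/var/lib/vendorapp vendorapp:1.0  # mount it wherever the application keeps its state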
Can docker still be used in this case?
Yes, but it depends on the application. It should be able to be installed headless, plus a couple of other things that are pretty specific. (E.g.: talking to third-party servers to get a license can create issues.)
Can we install any random software on a container?
Yes... but: remember that when the container is recreated, that software will be gone. It's better to create it as an image, and then deploy it. See my example below.
Then keep that in a private repository, upgrade versions, etc.?
Yes.
Here is an example pipeline:
Create a Dockerfile for the OS and what steps it takes to install the application. (Should be headless)
Build the image (at this point, it's called an image, not a container)
Test the image locally by creating a local container. This container is what has the configuration data such as environment variables, the volumes for persistent data it needs, etc.
If it satisfies the local developers' wants, then you can either:
Let your build servers create the image and publish it to an internal Docker registry (best practice)
Let your local developer publish it to an internal Docker registry
At that point, your next-level environments can then pull down the image from the Docker registry, configure it and create the container.
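A condensed sketch of that pipeline in commands; the image name, registry host, environment variable and volume are all placeholders:
docker build -t vendorapp:1.2.3 .                                  # build the image from the Dockerfile
docker run -d --name vendorapp-test -e LICENSE_SERVER=licenses.internal -v vendorapp-data:/data vendorapp:1.2.3   # local test container with its config and volume
docker tag vendorapp:1.2.3 registry.internal:5000/vendorapp:1.2.3
docker push registry.internal:5000/vendorapp:1.2.3                 # publish to the internal Docker registry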
In short, it will require a lot of elbow grease but is possible.
Can we install any random software on a container?
Generally yes, but you can have many problems with legacy software which was developed to work on bare metal.
First, there can be a persistence problem, but it can be solved using volumes.
Second, a program that works well on a full OS may not work so well in a container. Containers have some differences from VMs or bare metal. For example, due to the missing init process, some containers have a zombie process issue. About other differences you can read here.
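For the zombie process issue in particular, a commonly used mitigation is Docker's --init flag, which runs a small init process as PID 1 inside the container (the image name below is a placeholder):
docker run -d --init legacy-app:latest   # --init adds a minimal init that reaps zombie child processes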
Docker is a big win for stateless apps, but some heavy legacy apps may not work so well inside containers and should be tested well before using them in production.

Using docker to run a distributed computation

I just came across Docker, and was looking through its docs to figure out how to use it to distribute a Java project across multiple nodes, while making this distribution platform independent, i.e. the nodes can be running any platform. Currently I'm sending classes to different nodes and running them there with the assumption that these nodes have the same environment as the client. I couldn't quite figure out how to do this; any suggestions would be greatly appreciated.
I do something similar. In my humble opinion, whether or not to use Docker is not your biggest problem. However, using Docker images for this purpose can and will save you a lot of headaches.
We have a build pipeline where a very large Java project is built using Maven. The outcome of this is a single large JAR file that contains the software we need to run on our nodes.
But some of our nodes also need to run some 3rd-party software such as Zookeeper and Cassandra. So after the Maven build we use packer.io to create a Docker image that contains all the needed components, which ends up on a web server that can be reached only from within our private cloud infrastructure.
If we want to roll out our system we use a combination of Python scripts that talk to the OpenStack API and create virtual machines on our cloud, and Puppet, which performs the actual software provisioning inside of the VMs. Our VMs are CentOS 7 images, so what Puppet actually does is add the Docker yum repos, then install Docker through yum, pull in the Docker image from our repository server, and finally use a custom bash script to launch our Docker image.
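The last step, the custom bash script that launches the image, boils down to something like the following sketch (the registry host, image name and options are placeholders, not our actual setup):
docker pull registry.internal/compute-node:latest                                           # fetch the image produced by the build pipeline
docker run -d --name compute-node --restart=always registry.internal/compute-node:latest    # start it and restart it automatically if it exits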
For each of these steps there are certainly even more elegant ways of doing it.
