Using docker-compose, how do I share my image to docker-hub?

I'm very new to Docker. I made a simple Django app with docker-compose.
How do I post it to Docker Hub so someone can run docker run against it?

Docker Hub is a repository for Docker images (made with a Dockerfile). When you use docker-compose you are simply connecting together one or more images from Docker Hub using your composition (the YAML that describes the images and how to connect them). You aren't making an image with docker-compose. I don't think there is a place to store/share compositions (yet) at Docker. However, you might take a look at tutum.co. There you can save your docker-compose files (they call them stacks) and deploy them from Tutum. Full disclosure, I have nothing to do with tutum.co.
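That said, the image your compose file builds can itself be pushed to Docker Hub. A minimal sketch, assuming your Dockerfile lives in the current directory and myuser/myapp is a hypothetical Docker Hub repository:
# Build the image from your Dockerfile and tag it for Docker Hub
docker build -t myuser/myapp:1.0 .
# Log in and push; anyone can then do "docker run myuser/myapp:1.0"
docker login
docker push myuser/myapp:1.0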

As the previous answer said, Docker Hub is only for Docker images (i.e. images built from Dockerfiles).
Tutum was bought by Docker and has since shut down:
Tutum has shut down as of May 31st 2016. An evolution of the service
is now offered in Docker Cloud
It looks like there was an attempt (end of 2015) to create a kind of hub for docker-compose.yml files, using GitHub: composehub.com
It doesn't seem very active (55 stars on GitHub; the last update was a year ago, in October 2015).

Related

Moving my Docker application which is a collection of containers

I have been reading various articles about migrating my Docker application to a different machine. All the articles talk about “docker commit” or “export/import”. This only refers to a single container, which is first converted to an image, and then we do a “docker run” on the new machine.
But my application is usually made up of several containers, because I am following the best practice of segregating different services.
The question is, how do I migrate or move all the containers that have been configured to join together and run as one? I don't know whether “swarm” is the correct term for this.
The alternative I see is to simply copy the docker-compose file and Dockerfile to the new machine and do a fresh setup of the architecture, then copy all the application files. It runs fine.
My proposal, of course, is not the only solution, but it's quite clean:
1. Create your Docker images on one machine (where you have your Dockerfile).
2. Upload the images to a Docker registry (you can use your own Docker Hub account, or maybe a Nexus, or whatever). A sketch of steps 1 and 2 follows this list.
2.1. It's also recommended to tag your images with a version, and to protect against overwriting an image that has the same version but different code.
3. Use docker-compose to deploy (it's recommended to define a docker network for all the containers that have to interact among themselves); docker-compose up is like several docker run commands, but easier to maintain.
4. You can deploy on several machines just by using the same docker-compose.yml to deploy and access your registry.
4.1. Deployment can be done on a single host, swarm, Kubernetes... (you'd have to translate your docker-compose.yml into kubectl YAML files for that).
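A minimal sketch of steps 1 and 2, assuming a hypothetical image name myapp and a registry at registry.example.com:
# 1. Build and tag the image with an explicit version
docker build -t registry.example.com/myapp:1.2.0 .
# 2. Push it so other machines can pull it by name
docker push registry.example.com/myapp:1.2.0
# 3. On any target machine, the same docker-compose.yml then pulls and runs it
docker-compose up -d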
I agree with the docker-compose suggestion, and with storing your images in a registry or on your local machine. Your docker-compose file has one section per service, and each service is written in YAML format.
You are going to want version 3 YAML, I believe. From there you write something like the example below, where each service uses a Dockerfile-built image from your registry or from a local folder.
version: '3'
services:
  drupal:
    image:
    # ...ports, volumes, etc.
  postgres:
    image:
    # ...ports, volumes, etc.
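Once the image names and the elided options are filled in, a usage sketch for bringing the stack up and down:
# Start every service in docker-compose.yml in the background
docker-compose up -d
# Stop and remove the stack's containers
docker-compose down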
Disclosure: I took a Docker Course from Bret Fisher on Udemy.

Some questions on Docker basics?

I'm new to Docker. Most of the tutorials on Docker cover the same thing. I'm afraid I'm just ending up with piles of questions, and no answers really. I've come here after my fair share of Googling; kindly help me out with these basic questions.
When we install Docker, where does it get installed? Is it on our local computer or does it happen in the cloud?
Where do containers get pulled into? Is there a way I can see what is inside a container? (I'm using Ubuntu 18.04)
When we pull a Docker image, or clone a repository from Git, where does this data get stored?
Looks like you are confused after reading too many documents. Let me try to put this in simple words. Hope this will help.
When we install Docker, where does it get installed? Is it on our local computer or does it happen in the cloud?
You install Docker on a VM, be it your on-prem VM or a cloud instance. You can install Docker on your laptop as well.
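For example, on Ubuntu 18.04 one common way to install it locally is Docker's convenience script (a sketch; review the script before running it):
# Download and run Docker's install script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# Confirm the daemon is running on this machine
docker version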
Where do containers get pulled into? Is there a way I can see what is inside a container? (I'm using Ubuntu 18.04)
This question mostly reflects a terminology mix-up. We don't pull a container; we pull an image and run a container from that image.
Quick terminology summary:
Container -> a runnable instance of an image; containers let you package an application's code, configuration, and dependencies using a template called an image.
Dockerfile -> where you write your commands, i.e. your infrastructure blueprint.
Image -> an image is built from a Dockerfile. You use an image to create and run containers.
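A minimal sketch tying those terms together (the image name myapp is hypothetical):
# Dockerfile -> image: build from the Dockerfile in this directory
docker build -t myapp:1.0 .
# image -> container: create and run a container from that image
docker run -d --name myapp-container myapp:1.0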
Yes, you can get a shell inside a running container. Use the command below:
docker exec -it <container-id> /bin/bash
When we pull a Docker image, or clone a repository from Git, where does this data get stored?
You can pull open-source images from Docker Hub.
When you clone a Git project that is dockerized, you can look for the Dockerfile in that project and create your own image by building it:
docker build -t <yourimagename:tag> .
When you build or pull an image, it gets stored locally.
Use the docker images command to list them.
The Docker daemon gets installed on your local machine, and everything you do with the Docker CLI gets executed on your local machine and its containers.
(Not sure about the first part of your question.) You can easily access your Docker containers with docker exec -it <container name> /bin/bash; for that you will need the container to be running. Check running containers with docker ps.
(Again, I do not entirely understand your question.) The images that you pull get stored on your local machine as well. You can see all the images present on your machine with docker images.
Let me know if this was helpful and if you need any further information.

How to get a transferable docker-compose stack without Docker Hub

I have a few Docker images composed together in a stack using docker-compose.yml.
Now I want to transfer the whole docker-compose stack to another host machine without uploading it to Docker Hub,
and deploy it on the Docker swarm.
I saw there is a thing called docker compose bundle; would that help?
If you’re deploying on a multi-host swarm (or something similar like Kubernetes or Nomad) you all but need a Docker registry. It doesn’t specifically have to be Docker Hub — quay.io, Amazon’s ECR, Google’s GCR, and self-hosted registries all work fine — but you do need to have pushed the built images somewhere where the orchestrator can retrieve them by name.
I’ve never used docker-compose bundle myself, but its documentation also notes that its operation “requires interaction with a Docker registry”.
The only real alternative is using docker save and docker load to manually move images between machines, but as a manual process it will get tedious very quickly, and you need to make sure an identical set of images are on every machine for consistency. Using a registry will be vastly easier.
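For completeness, a sketch of that docker save/docker load path (the image name myapp:1.0 and the host otherhost are assumptions):
# Stream an image to the other machine over SSH in one step
docker save myapp:1.0 | ssh user@otherhost docker load
# Repeat for every image in the stack, then run docker-compose up over there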
The easiest way to do it is to use a Docker registry. The problem with Docker Hub is that you can only have one private repository for free; the rest must be public or paid.
Thankfully, there are other (free) alternatives:
Deploy your own private registry (a minimal sketch follows below). Here is a nice tutorial where you can try it in the browser.
Use a free private registry. I personally use Codefresh. It can automatically build your image from a private repo (like Bitbucket, which has a free plan too), but you can also just use it like a "simple" Docker registry and push and pull your Docker images there.
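A minimal sketch of the self-hosted option using the official registry image (host names and ports are assumptions):
# Run a private registry on one machine
docker run -d -p 5000:5000 --name registry registry:2
# Tag a local image for that registry and push it
docker tag myapp:1.0 myregistryhost:5000/myapp:1.0
docker push myapp:1.0 myregistryhost:5000/myapp:1.0 2>/dev/null || docker push myregistryhost:5000/myapp:1.0
# Other machines then pull it by the registry-qualified name
# (plain-HTTP registries must be listed under insecure-registries in the daemon config)
docker pull myregistryhost:5000/myapp:1.0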

Upgrade docker container to latest image

We are trying to upgrade a docker container to the latest image.
Here is the process I am trying to follow.
Let's say I have already pulled a docker image having version 1.1
Create a container with image 1.1
Now we have fixed some issue in image 1.1 and uploaded it as 1.2
After that I wanted to update the container running on 1.1 to 1.2
Below are the steps I thought I would follow:
Pull the latest image
Inspect the docker container to get all the info (ports, mapped volumes, etc.)
Stop the current container
Remove the current container
Create a container with the values from step 2, using the latest image.
The problem I am facing is that I don't know how to use the output of the "docker inspect" command while creating the container.
What you should have done in the first place:
In production environments with lots of containers, you will lose track of docker run commands. In order to keep up with the complexity, use docker-compose.
First you need to install docker-compose. Refer to the official documentation for that.
Then create a YAML file describing your environment. You can specify more than one container (for apps that require multiple services, for example nginx, php-fpm and mysql).
Having done all that, when you want to upgrade containers to newer versions you just change the version in the YAML file and do a docker-compose down and docker-compose up (see the sketch below).
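A sketch of that upgrade flow, assuming the compose file pins an image tag such as myapp:1.1:
# After editing docker-compose.yml to point at myapp:1.2
docker-compose pull    # fetch the new image
docker-compose down    # stop and remove the old containers
docker-compose up -d   # recreate them from the new image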
Refer to compose documentation for more info.
What to do now:
Start by reading the docker inspect output, then gather the facts (example queries follow this list):
Ports published (host and container mapping)
Networks used (names, drivers)
Volumes mounted (bind/volume, driver, path)
Possible runtime command arguments
Possible environment variables
Restart policy
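A few docker inspect queries that extract those facts (a sketch; <container> is a placeholder):
# Published ports (host/container mapping)
docker inspect --format '{{json .HostConfig.PortBindings}}' <container>
# Mounted volumes and binds
docker inspect --format '{{json .Mounts}}' <container>
# Environment variables and restart policy
docker inspect --format '{{json .Config.Env}}' <container>
docker inspect --format '{{.HostConfig.RestartPolicy.Name}}' <container>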
Then try to create a docker-compose YAML file with those facts on a test machine, and test your setup.
When confident enough, roll it out in production and keep the latest compose YAML for later reference.

How do I use a local image in a swarm cluster

A colleague found out about Docker and wants to use it for our project. I started to use Docker for testing. After reading an article about Docker swarm I wanted to test it.
I have installed 3 VMs (Ubuntu Server 14.04) with docker and swarm. I followed some how-tos (http://blog.remmelt.com/2014/12/07/docker-swarm-setup/ and http://devopscube.com/docker-tutorial-getting-started-with-docker-swarm/). My cluster works. I can launch, for example, a basic Apache container (the image was pulled from Docker Hub), but I want to use my own image (an Apache server with my web site).
I tried to load an image (after saving it to a .tar), but this option isn't supported by the clustering mode; same thing with the import option.
So my question is: can I use my own image without pushing it to Docker Hub, and how do I do this?
If your own image is based on a Dockerfile that you build, you can execute the build command on your project while targeting the swarm (a sketch follows below).
However, if the image wasn't built from a Dockerfile but created manually, you need to have a registry in between that you can push to, either Docker Hub or some other registry solution like https://github.com/docker/docker-registry
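A sketch of the first option, assuming a standalone-swarm manager listening on swarm-manager:2375 (the address and image name are hypothetical):
# Point the Docker CLI at the swarm manager instead of the local daemon
export DOCKER_HOST=tcp://swarm-manager:2375
# The build runs on one swarm node; the image lives only on that node,
# so the scheduler's image affinity will place containers for it there
docker build -t mysite:1.0 /path/to/project
docker run -d -p 80:80 mysite:1.0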
