How do I use a local container in a swarm cluster - docker

A colleague discovered Docker and wants to use it for our project, so I started using Docker for testing. After reading an article about Docker Swarm, I wanted to try it out.
I have installed three VMs (Ubuntu Server 14.04) with Docker and Swarm, following a couple of how-tos (http://blog.remmelt.com/2014/12/07/docker-swarm-setup/ and http://devopscube.com/docker-tutorial-getting-started-with-docker-swarm/). My cluster works: I can launch, for example, a basic Apache container (the image was pulled from Docker Hub), but I want to use my own image (an Apache server with my web site).
I tried loading an image (after saving it to a .tar with docker save), but that option isn't supported in clustering mode; the same goes for the import option.
So my question is: can I use my own image without pushing it to Docker Hub, and if so, how?

If your image is built from a Dockerfile, you can execute the build command on your project while targeting the swarm.
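As a minimal sketch, assuming the swarm manager exposes its API on tcp://swarm-manager:2375 (hostname and port are placeholders):

export DOCKER_HOST=tcp://swarm-manager:2375
docker build -t my-apache-site .

With DOCKER_HOST pointed at the swarm manager, the build runs against the cluster instead of your local daemon.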
However, if the image wasn't built from a Dockerfile but created manually, you need a registry in between that you can push to: either Docker Hub or some other registry solution like https://github.com/docker/docker-registry
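If Docker Hub is not an option, a self-hosted registry can fill that role. A rough sketch using the official registry image (the registry-host name and port 5000 are assumptions):

docker run -d -p 5000:5000 --name registry registry:2
docker tag my-apache-site registry-host:5000/my-apache-site
docker push registry-host:5000/my-apache-site

Every node in the swarm can then pull registry-host:5000/my-apache-site when the service is scheduled.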

Related

How can I make an image from the current environment to share on Docker Hub?

I have some services running when I run the docker-compose up command. Now I want to make an image from the current environment and share it on Docker Hub, so that I can later just run docker pull/run my_own_image from Docker Hub.
Is there any way to do that?
Pretty much anything that you can do with images or containers in Docker, you can do with Compose. In your case, since you can push your custom image to your Docker Hub registry using the docker image push (or docker push) command, you can do the same with Compose.
As for Compose, you use docker-compose push (no surprises there – consistency between APIs/CLIs).
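As a sketch, assuming a docker-compose.yml whose service carries an image name under your Docker Hub account (the service name web and the account myhubuser are hypothetical):

version: "3"
services:
  web:
    build: .
    image: myhubuser/my_own_image:latest

docker-compose build
docker login
docker-compose push

The image key tells Compose what to tag the built image as, and therefore where docker-compose push sends it.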
Tip: when in doubt, use --help. It's the best way (next to Google) to explore a CLI. If you're not sure what commands/options are available for Compose, just type docker-compose --help. If you want to see the available options for the push command (for example), use docker-compose push --help.

How can I docker commit azure container instance to azure container registry

We have Ansible configured to deploy our various applications on an IIS environment. I am trying to create a Docker image of the deployed applications so that I can just start up containers as needed for testing and other purposes.
I am planning to build on the Windows IIS image, start the container on Azure, run our Ansible to install everything on the server, then save the container as an image.
I cannot find any documentation on how I can docker commit the container image into our private azure container registry.
Is it possible?
If you have an existing Docker registry in Azure, you should be able to use the az acr login --name myregistry command to authenticate to it (see https://learn.microsoft.com/en-us/azure/container-registry/container-registry-get-started-docker-cli). Make sure you have a registry created for the container image you want to push up.
Next, you can run the container in Azure and do all the installation you want. SSH or RDP into the Azure instance that is running the container. Now run docker ps and find the container id of the correct container. Next, use docker commit <container id> myregistry.azurecr.io/samples/nginx.
Then, just docker push myregistry.azurecr.io/samples/nginx
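Putting the whole flow together (a sketch; myregistry and the samples/nginx repository are the example names from above, and the container id comes from docker ps):

az acr login --name myregistry
docker ps
docker commit <container id> myregistry.azurecr.io/samples/nginx
docker push myregistry.azurecr.io/samples/nginx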
Also, I'm not sure what your use case is, but starting a container in order to modify and commit it this way is an atypical use of Docker, since the build isn't reproducible via a Dockerfile. It looks like there are ways to replace Dockerfiles with Ansible playbooks, with something like ansible-container (https://docs.ansible.com/ansible-container/), so you might want to take a look at that (I've never used this tool).

GCE doesn't deploy GCR image correctly

I have followed this guide from the Google documentation in order to push a custom Docker image to Google Container Registry and then start a new GCE instance from that image. As a first test, I wanted to use the public anaconda3 image from Docker Hub without any modification.
Here are the steps I have taken so far, after installing gcloud and docker:
gcloud auth configure-docker -> configured docker with my gcloud credentials
docker pull continuumio/anaconda3 -> pulled the public image
docker tag continuumio/anaconda3 eu.gcr.io/my-project-id/anaconda3 -> tagged the local image with the registry name as specified in the doc
docker push eu.gcr.io/my-project-id/anaconda3 -> pushed the image to GCR
Good! I am now able to see my image through the GCR interface, and also able to deploy it with GCE. I chose to deploy it on an f1-micro instance, Container-Optimized OS 67-10575.62.0 stable, 10 GB boot disk, Allow HTTP traffic, etc.
But when I connect with SSH to the freshly created VM instance, I can't find the anaconda3 libraries (which are supposed to be in /opt/conda). Instead, I see an /opt/google directory, which makes me think that the image has not been deployed correctly and GCE is using a default image...
So, to check whether the image was pushed correctly to GCR, I deleted my local image and pulled it again from GCR:
docker rmi -f eu.gcr.io/my-project-id/anaconda3
docker pull eu.gcr.io/my-project-id/anaconda3:latest
I run the image
docker run -t -i eu.gcr.io/my-project-id/anaconda3
and I can see that everything is fine: anaconda3 is installed correctly under /opt/conda with all the tools needed (Pandas, NumPy, Jupyter notebook, etc.)
I tried to find people with the same problem, without any success... maybe I have done something wrong in my process?
Thanks!
TL;DR: I have pushed an anaconda3 image to Google GCR, but when I launch a VM instance with this image, I do not have anaconda on it.
It's normal that you can't find the anaconda libraries installed directly on the GCE instance.
When you choose to deploy a container image on a GCE VM instance, a Docker container is started from the image you provide (in your example, eu.gcr.io/my-project-id/anaconda3). The libraries are not installed on the host, but rather inside that container (run docker ps to see it; it normally has the same name as your VM instance). If you run something like:
docker exec -it <docker_container_name> ls /opt/conda
Then you should see the anaconda libraries, only existing inside the container.
When you run docker run -t -i eu.gcr.io/my-project-id/anaconda3, you're actually starting a container and running an interactive bash session inside it (see the CMD here). That's why you can find the anaconda libraries: you are inside the container!
Containerization software (Docker here) provides isolation between your host and your containers. I'd suggest reading the documentation about containerization, Docker, and how to run containers on Container-Optimized OS.
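For instance, from an SSH session on the VM, something like this should drop you into the running container (the container name is whatever docker ps reports; <container_name> is a placeholder):

docker ps --format '{{.Names}}'
docker exec -it <container_name> /bin/bash

From that shell, /opt/conda and the anaconda tooling should be visible, just as in your local docker run test.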

Local development and swarm service image update

We are using Docker Swarm on developers' machines for development. A Docker service is using, e.g., the foo:beta image.
When a developer builds a new feature for foo, he builds a new image of the container locally, under the same name (the sha is different).
However, we are not able to update the service to use the new image version. We tried
docker service update --force --image <component>
without success.
We are running the latest edge docker build: 17.05.0-ce-rc1-mac8 (16582)
The key is to use a local tag for images that does not exist in the remote repository. When Swarm can't find the image by the given tag in the remote repo, it will use the local one.
For that purpose, we tag all developer-related images with an additional tag, e.g. dev, that only exists on the developer's machine. This way we can rebuild the image and, by forcing the service update, update the running code, as sketched below.
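A sketch of that workflow (the image name foo and service name foo_service are hypothetical):

docker build -t foo:dev .
docker service update --force --image foo:dev foo_service

Because the dev tag doesn't exist in any remote registry, the swarm node falls back to the locally built image.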

How to use Java Buildpack in Bluemix Docker?

I am new to Bluemix and Docker. I want to use the Java buildpack instead of the default IBMLiberty in a Docker container on Bluemix.
Is it possible? I tried searching on the internet but could not find relevant information.
Buildpacks and Docker (IBM Containers) images are two different things.
The IBMLiberty container image has a Liberty runtime deployed on it, but it does not use buildpack technology.
If you don't want to use the IBMLiberty container image, you can load Docker Hub images into your private registry. The link below contains the information.
https://console.ng.bluemix.net/docs/containers/container_images_pulling.html
Basically, you load a Docker Hub image locally and push it to your private registry in Bluemix. One image choice is ubuntu.
https://hub.docker.com/_/ubuntu/
After you have loaded the image into your private registry, you can use "cf ic run" to run a container from that image. Here is the reference:
https://console.ng.bluemix.net/docs/containers/container_cli_reference_cfic.html#container_cli_reference_cfic__run
Here is more info on using Docker images on Bluemix and the cf ic commands:
https://console.ng.bluemix.net/docs/containers/container_images_adding_ov.html
https://console.ng.bluemix.net/docs/containers/container_cli_reference_cfic.html
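As a rough sketch of that flow (registry.ng.bluemix.net is the US-South registry endpoint, and mynamespace stands in for your own registry namespace):

cf login
cf ic login
docker pull ubuntu
docker tag ubuntu registry.ng.bluemix.net/mynamespace/ubuntu
docker push registry.ng.bluemix.net/mynamespace/ubuntu
cf ic run --name my-container registry.ng.bluemix.net/mynamespace/ubuntu

cf ic login configures your local docker client to authenticate against the Bluemix registry, so the tag/push steps work as with any private registry.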
