I just don't understand how docker works

I've read some articles comparing VMs (VMware, VirtualBox, ...) with Docker, but I just can't understand what is going on.

There's a typical example of creating your own Docker image: they start by pulling an Ubuntu image from Docker Hub, install some stuff in it (Django, for example), and turn all of it into a new Docker image.

Then, if you have Docker installed on a Mac, shouldn't running that image look like this?

(HOST) Mac > Docker > Ubuntu VM > Django

But they say Docker makes it possible to run Django like this:

Mac > Docker > Django image

When you build the image you start with Ubuntu, so the Django inside must be an Ubuntu-based Django. Where did I miss the point?

Also, for images like mysql: what is the base OS of that running MySQL? Is it possible to run the same Docker image on Ubuntu and on CentOS, even at the same time? How?

Don't think of the "FROM ubuntu" as a VM running Ubuntu, but just as the Ubuntu libraries and filesystem needed to run the rest of the Docker image. A container does not load an entire OS; it shares its host's kernel and resources.
And think of Docker a bit like a cloud: you end up with a process (a container) running something and listening on a specific port.
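One quick way to see the difference in practice (a minimal sketch; the image tags are just examples) is to compare the kernel reported inside containers built from different base images with the kernel of the host:
# assuming a Linux host with Docker installed; the tags below are illustrative
docker run --rm ubuntu:22.04 uname -r   # the kernel version comes from the host
docker run --rm centos:7 uname -r       # same kernel again, only the userland differs
uname -r                                # on a Linux host this prints the same version
On a Mac, Docker Desktop runs a single lightweight Linux VM that all containers share, which is still very different from booting a full Ubuntu VM per image.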

Related

Need help understanding how to run an app from docker.io

I'm new to Docker and trying to understand how images work. I ran the following command:
sudo docker search hello-world
and it returned this:
docker.io docker.io/carinamarina/hello-world-app This is a sample Python web application,
I then ran:
sudo docker run docker.io/carinamarina/hello-world-app
...and this was the output from the terminal:
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
I don't understand. How can the IP address be 0.0.0.0? I entered that into a browser and got nothing. I tried localhost:5000 and got nothing.
How does one get to see this webapp run?
tl;dr
you need to publish the container's port to the host network to see the application working
long version:
Good for you for starting to work with Docker!
I'll start by explaining a little bit about Docker, then explain what is happening in your case.
First of all, there is a difference between an "image" and a "container".
An image is the blueprint that containers are created from.
You write the definition of the image (install this, copy that from the host, build that, etc.) in a Dockerfile, tell Docker to build the image, and then RUN containers from that image.
So if you have one image and run two containers from it, they will both have the same instructions (definition).
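A minimal sketch of that idea (the Dockerfile contents and the names below are just illustrative):
cat > Dockerfile <<'EOF'
# one image definition...
FROM python:3.11-slim
CMD ["python", "-c", "print('hello from a container')"]
EOF
docker build -t demo-image .       # build the image once
docker run --name c1 demo-image    # ...then run as many containers from it as you like
docker run --name c2 demo-image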
What happened in your case:
When you invoke the docker run command, the first thing you see is
Unable to find image 'carinamarina/hello-world-app:latest' locally
That means your local Docker could not find an image (blueprint) named docker.io/carinamarina/hello-world-app locally, so it does the following:
it pulls the image from the remote registry,
then it extracts the layers of the image,
then it starts the container and shows the logs from INSIDE the container.
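You can also do the pull step explicitly and confirm that the image now exists locally (a quick sketch using the same image name):
sudo docker pull docker.io/carinamarina/hello-world-app
sudo docker images    # the image should now be listed locally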
Why it didn't work for you:
The application is running inside the container on port 5000.
The container has a completely separate network from the host it is running on (your CentOS 7 machine).
You have to set up port forwarding between the Docker network and the host network so you can USE the application from the HOST.
You can read more about that in the Docker networking documentation.
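For example (a sketch; mapping to the same port number on the host is an arbitrary choice), publishing container port 5000 to host port 5000 should make the app reachable:
sudo docker run -p 5000:5000 docker.io/carinamarina/hello-world-app
# then, from the host:
curl http://localhost:5000/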
I recommend the following places to start with:
Play with Docker
Docker tutorial for beginners

How to back up and restore odoo and postgres containers to another host machine?

I have postgres and odoo containers in Docker; Docker is installed on an Ubuntu 18.04 machine.
I need to run the odoo and postgres containers on another machine. The problem is how to back up and restore the images and containers (postgres version 9.6, odoo version 11) on the other laptop.
To export the images behind your containers you should use the docker save command:
docker save odoo | gzip > odoo.gz
docker save db | gzip > db.gz
where odoo and db are the names of the images your containers use (note that docker save exports images; to capture changes made inside a running container, first commit it to an image with docker commit).
Then copy the odoo.gz and db.gz files to the other laptop and import them using the docker load command:
docker load < db.gz
docker load < odoo.gz
docker load will create images named odoo and db, which you can then use to run new containers; use the same command you used to run those containers on your initial laptop.
Please note that docker save exports only the image, not mounted volumes, and (as you mentioned in the comment) you are using a volume:
-v volume-pg:/var/lib/postgresql/
There is no single built-in command to export data from a Docker volume that I am aware of; you can find the suggested approach in the official Docker documentation on volume management.
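That approach boils down to running a throwaway container that mounts the volume and archives its contents (a sketch; the alpine image and archive name are assumptions):
# on the old laptop: archive the volume's contents into the current directory
docker run --rm -v volume-pg:/volume -v "$(pwd)":/backup alpine tar czf /backup/volume-pg.tgz -C /volume .
# on the new laptop: the volume is created on first use and the archive unpacked into it
docker run --rm -v volume-pg:/volume -v "$(pwd)":/backup alpine tar xzf /backup/volume-pg.tgz -C /volume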
You can find more details on docker save and docker load in the official Docker documentation:
docker save
docker load
PS: It looks like you are running independent Docker containers instead of using docker-compose, which would manage the whole setup for you and is a much better approach; you can find a sample docker-compose.yml file on the Docker Hub page for odoo, and you can read about docker-compose here. Please be aware that docker-compose will not solve your volume migration issue; you still have to migrate the Docker volumes manually as suggested above.
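A minimal docker-compose.yml along the lines of the Docker Hub sample might look like this (a sketch only; the image tags follow the versions in the question, the credentials are assumptions, and the volume mount mirrors the -v volume-pg:/var/lib/postgresql/ from the question):
cat > docker-compose.yml <<'EOF'
version: '3.1'
services:
  web:
    image: odoo:11.0
    depends_on:
      - db
    ports:
      - "8069:8069"
  db:
    image: postgres:9.6
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=odoo
      - POSTGRES_PASSWORD=odoo
    volumes:
      - volume-pg:/var/lib/postgresql/
volumes:
  volume-pg:
    external: true    # reuse the volume restored in the step above
EOF
docker-compose up -d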

GCE doesn't deploy GCR image correctly

I have followed this guide from the Google documentation in order to push a custom Docker image to Google Container Registry and then start a new GCE instance with this image. At first I wanted to try using the public anaconda3 image from Docker Hub without any modification (in order to test).
So here are the steps I have done so far after installing gcloud and docker:
gcloud auth configure-docker -> configured docker with my gcloud credentials
docker pull continuumio/anaconda3 -> pulled the public image
docker tag continuumio/anaconda3 eu.gcr.io/my-project-id/anaconda3 -> tagged the local image with the registry name as specified in the doc
docker push eu.gcr.io/my-project-id/anaconda3 -> pushed the image to GCR
Good! I am now able to see my image through the GCR interface, and also able to deploy it with GCE. I chose to deploy it on an f1-micro instance, Container-Optimized OS 67-10575.62.0 stable, 10 GB boot disk, Allow HTTP traffic, etc.
But when I connect with ssh to the freshly created VM instance, I can't find the anaconda3 libraries (which are supposed to be in /opt/conda). Instead, I can see an /opt/google directory, which makes me think that the image has not been deployed correctly and GCE is using a default image...
So I tried to check whether the image was pushed correctly to GCR: I deleted my local image and pulled it once again from GCR:
docker rmi -f eu.gcr.io/my-project-id/anaconda3
docker pull eu.gcr.io/my-project-id/anaconda3:latest
Then I ran the image:
docker run -t -i eu.gcr.io/my-project-id/anaconda3
and I can see that everything is fine: anaconda3 is installed correctly inside /opt/conda with all the tools needed (Pandas, NumPy, Jupyter notebook, etc.).
I tried to find people with the same problem, without any success... maybe I have done something wrong in my process?
Thanks!
TL;DR: My problem is that I have pushed an anaconda3 image to Google GCR, but when I launch a VM instance with this image, I do not have Anaconda on it.
It's normal that you can't find anaconda libraries installed directly on the GCE instance.
Actually, when you choose to deploy a container image on a GCE VM instance, a Docker container is started from the image you provide (in your example, eu.gcr.io/my-project-id/anaconda3). The libraries are not installed on the host, but rather inside that container (run docker ps to see it; normally it has the same name as your VM instance). If you run something like:
docker exec -it <docker_container_name> ls /opt/conda
Then you should see the anaconda libraries, only existing inside the container.
When you run docker run -t -i eu.gcr.io/my-project-id/anaconda3, you're actually starting the container and running an interactive bash session inside it (see the CMD here). That's why you can find the anaconda libraries: you are inside the container!
Containerization software (Docker here) provides isolation between your host and your containers. I suggest you read the documentation about containerization, Docker, and how to run containers on Container-Optimized OS.
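Putting that together, this is roughly what you would see over ssh on the VM (a sketch; the container name is whatever docker ps reports, normally matching your VM instance name):
ls /opt                                    # on the Container-Optimized OS host: google/, but no conda/
docker ps --format '{{.Names}}'            # find the container started from your image
docker exec -it <container_name> ls /opt/conda   # the anaconda install lives inside the container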

Run image in docker

I am new to Docker; I have watched many videos and read articles, and from there I came to know what exactly Docker is.
But my question is:
Let's suppose I have three Docker images:
The first image, of "Application 1", is created in a Windows 7/8/10 environment.
The second image, of "Application 2", is created in CentOS.
The third image, of "Application 3", is created in Linux.
So, can I run all these three images simultaneously in a single environment (Windows, CentOS, or Linux)?
Surely you can! That's the advantage of Docker: it runs images on any platform without worrying about what is inside the image. So on CentOS you can run an Ubuntu image and vice versa.
You can run any recently created Linux Docker container image on any Docker host running Linux. There are exceptions around various kernel features that you might not have access to on an older kernel, though. Windows apps do not run on Docker on Linux unless you are doing something like running them under Wine.
There are Windows-specific containers which only run on Windows hosts, but if you are using the standard (non-Windows-exclusive) images, they run the same on all hosts.
One of the core ideas of Docker is that you should be able to run your service in the exact same environment (the container) on any system (the host), which works pretty well (with the exception of the Windows-specific containers!).
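To see Linux images from different distributions running on the same host (a sketch; the tags are just examples):
docker run --rm ubuntu:22.04 cat /etc/os-release   # reports Ubuntu
docker run --rm centos:7 cat /etc/os-release       # reports CentOS, on the same host and kernel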

Which Docker images will run on Kubernetes?

How can I find out if a given Docker image can be run using Kubernetes?
What should I do to help ensure that my images will run well in any Kubernetes-managed environment?
All Docker images can be run on Kubernetes -- it uses Docker to run the images.
You can expose ports from containers just like when using Docker directly, pass in environment variables, mount storage volumes from the host into the container, and more.
If you have anything particular in mind, I'd be interested in hearing about any image you find that can't be run using Kubernetes.
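As a quick check (a sketch; the deployment name is arbitrary and the image is the one from the earlier question), you can run an ordinary Docker Hub image on a cluster and expose its port:
kubectl create deployment hello --image=docker.io/carinamarina/hello-world-app
kubectl expose deployment hello --port=5000 --type=NodePort
kubectl get pods    # the image is pulled and started just as Docker would run it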
It depends on the processor architecture of the machine. If the image is compatible with the underlying hardware architecture, the cluster should be able to deploy the container. I had this problem when I tried to deploy a Docker container on a Raspberry Pi 3 (an ARM machine) with a Docker image built for x86-64.
For a practical test, try to deploy a container from the following image on an x86-64 machine:
docker pull arifch2009/hello
The following error will be shown:
standard_init_linux.go:178: exec user process caused "exec format error"
This is a simple application that prints "Hello World". However, the program inside the image is compiled for the ARM architecture, so the binary cannot be executed on anything other than an ARM machine.
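One way to check an image's target platform before trying to deploy it (a sketch; the fields come from docker image inspect):
docker pull arifch2009/hello
docker image inspect --format '{{.Os}}/{{.Architecture}}' arifch2009/hello
# e.g. linux/arm here, which will not run on an x86-64 node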

Resources