how to add docker to a ubuntu 14 qcow2 image

I have a Ubuntu 14 .qcow2 cloud image, which I'm running with Fedora 20 KVM and OpenStack. I would like to add / install docker to that image and "re-save" it as a base image, so any VMs created from it would have docker pre-installed.
How is this done?

Since I'm developing on a Mac, using Parallels, I found these instructions, which worked for me:
http://vanappdeveloper.com/2013/07/04/converting-parallels-vm-to-linux-kvm/
Basically, this involves installing Ubuntu under Parallels, installing docker on the freshly installed Ubuntu OS, locating the actual .hds disk file (make sure you grab the one with non-zero size!), copying that file over to the machine running qemu, and doing the qemu conversion. By following these instructions, I was able to upload the resulting .qcow2 image file to my OpenStack instance and launch a couple of VMs, which were pre-configured with docker.
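For the conversion and upload steps, this is roughly what it boils down to - a sketch, assuming the Parallels disk is named ubuntu14.hds (file and image names here are illustrative):
# Convert the Parallels disk to qcow2 (qemu-img has a "parallels" format driver):
qemu-img convert -f parallels -O qcow2 ubuntu14.hds ubuntu14-docker.qcow2
# Upload the result to Glance so new VMs can boot from it:
glance image-create --name "ubuntu14-docker" --disk-format qcow2 \
  --container-format bare --file ubuntu14-docker.qcow2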

Related

Running docker container as amd64 machine using an M1 Mac

I have a Dockerfile which creates an image, and I run it using docker compose together with a container built from a Postgres image. (To set up a local environment for Airflow - we use the mwaa local runner.)
Recently I got a new M1 Pro machine and I'm running into issues running the container.
The problem, from my understanding, is that the image is built and then run on a machine with a different kind of CPU architecture, which causes pip to look for wheels for that architecture. My colleague has an Intel Mac and he says he doesn't experience any issues.
The build phase is OK, but when I run the container, we've set docker compose to run an entrypoint script that also installs some airflow providers and other dependencies, one of which is plyvel, which fails to install and causes other packages not to install as well. When I remove plyvel from the requirements.txt file, the installation completes, but some of my airflow providers are missing some files or attributes, which creates its own issues.
I tried forcing docker to build and run the image and container as amd64 by changing the build command to:
docker build --platform linux/amd64 --rm --compress $3 -t amazon/mwaa-local:2.2 ./docker
This runs, but very slowly.
I also added platform: linux/amd64 in the docker-compose file to both the postgres and the local-runner containers, as sketched below.
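For reference, that compose change looks roughly like this (service names and images are illustrative and would need to match the mwaa local runner's own compose file):
services:
  postgres:
    image: postgres
    platform: linux/amd64
  local-runner:
    image: amazon/mwaa-local:2.2
    platform: linux/amd64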
Then, when I spin up the container, it takes a long time to reach a working state where I can access the airflow UI in the web browser, and the UI itself is very slow - every link takes a few seconds to process and redirect me. I believe this is due to some emulation or something.
I then found this article:
https://medium.com/nttlabs/buildx-multiarch-2c6c2df00ca2
It says there is a faster way to run without emulation, but I didn't understand how to implement it.
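If I read the article correctly, the trick is cross-compilation: the build stage runs natively on the arm64 host via --platform=$BUILDPLATFORM and only the compiled output targets amd64, so no QEMU emulation happens during the build. A minimal sketch for a compiled language (this pattern would not directly fix pip compiling C extensions such as plyvel):
# syntax=docker/dockerfile:1
FROM --platform=$BUILDPLATFORM golang:1.20 AS build
# TARGETOS/TARGETARCH are set automatically by buildx from --platform
ARG TARGETOS
ARG TARGETARCH
WORKDIR /src
COPY . .
# Compile natively on the build host, targeting the requested platform:
RUN GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /out/app .

FROM debian:bullseye-slim
COPY --from=build /out/app /usr/local/bin/app
CMD ["app"]
Such an image would then be built with something like: docker buildx build --platform linux/amd64 -t myapp .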
In addition, found this Reddit thread:
https://www.reddit.com/r/docker/comments/qlrn3s/docker_on_m1_max_horrible_performance/
They suggest building and running the container inside a virtual machine; I'm not sure if that is the way to go in my situation.
I tried both Docker Desktop and Rancher Desktop (with dockerd), but both show the same symptoms.

Using Docker and Cypress in Same Docker Image

Fair warning: I'm new to all of this, so there might be some mistakes in my thinking process.
I want to system test an application we are developing, and we ship this application via Docker, so that's what I want to test.
For GitLab CI, this means creating a Docker image which has Docker in Docker and Cypress, since that is what I'd like to use.
So just from checking the Docker docs I can see that Docker can be installed on a multitude of Linux distros, but not on Alpine - yet the official image is Alpine-based.
The Cypress docs, however, show that Cypress cannot be installed on Alpine. Only the package managers "apt-get" and "yum" are supported, which is Ubuntu and Fedora, respectively.
So as far as I can tell, it's not possible to have both of these at once? Which would be absolutely baffling (but so is the package manager chaos I just learned about).
What I tried:
used the Docker image as a base and tried to install Cypress (does not work because there is no installation manual and the packages you need to install via apt-get don't exist for apk)
used the Cypress image as a base and tried to install Docker (does not work because the Cypress images don't work)
used another image and tried to install both (does not work because installing Docker inside the Docker container does not work, that's why they have the image provided)
used DinD with another distro (cruizba/ubuntu-dind, fails with "dockerd is not running after max time")
So... what am I missing? Is there any way to get to the point where I can use both Cypress and DinD in the same image?
There is an image named blackholegalaxy/cypress-dind which combines DinD and Cypress.
Sadly it's really old and there is no easy way to update Docker to the newest version.
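One route that does work in practice is to flip the base image: start from a Debian-based Cypress image and add only the Docker CLI, letting GitLab CI's docker:dind service provide the daemon. A sketch, assuming a cypress/base tag exists for your Node version; the repository setup follows Docker's standard Debian install instructions:
FROM cypress/base:16
# Install only the Docker CLI; the daemon runs in GitLab's docker:dind
# service container and is reached through DOCKER_HOST.
RUN apt-get update \
 && apt-get install -y --no-install-recommends ca-certificates curl gnupg \
 && install -m 0755 -d /etc/apt/keyrings \
 && curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg \
 && echo "deb [signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(. /etc/os-release && echo $VERSION_CODENAME) stable" > /etc/apt/sources.list.d/docker.list \
 && apt-get update \
 && apt-get install -y --no-install-recommends docker-ce-cli \
 && rm -rf /var/lib/apt/lists/*
In .gitlab-ci.yml the job would then declare docker:dind under services: and point DOCKER_HOST at tcp://docker:2375, so the Cypress tests and the docker CLI run in the same job while the daemon lives in the service container.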

Docker run image with wrong architecture on raspberry pi

I have a Node.js server in Docker. On my Windows 10 64-bit machine it works fine, but when I try to run the image on my Raspberry Pi I get: standard_init_linux.go:190: exec user process caused "exec format error".
mariu5 in the Docker forum has a workaround, but I do not know what to do with it.
https://forums.docker.com/t/standard-init-linux-go-190-exec-user-process-caused-exec-format-error/49368/4
Where can I update the deployment.template.json file, and does the Raspberry Pi 3 Model B+ have an arm32 architecture?
You need to rebuild your image on the raspberry pi or find one that is compatible with it.
Perhaps this example might help:
https://github.com/hypriot/rpi-node-example-hello-world
The link you posted is not a workaround but rather a "one can't do that":
You have to run a docker image that was built for a particular architecture on a docker node that is running that same architecture. Take docker out of the picture for a moment. You can't run a Linux application compiled and built on a Linux ARM machine on a Linux amd64 machine. You can't run a Linux application compiled and built on a Linux Power machine on a Linux amd64 machine.
https://forums.docker.com/t/standard-init-linux-go-190-exec-user-process-caused-exec-format-error/49368/4
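As for the second part of the question: the Pi 3 Model B+ has a 64-bit SoC, but stock Raspbian is 32-bit, so images for it should target linux/arm/v7. With a recent Docker that ships buildx (and its bundled QEMU support), you can cross-build from the Windows/amd64 machine instead of rebuilding on the Pi - a sketch, with the image name being illustrative:
# On the amd64 machine, build a 32-bit ARM image and push it to a registry:
docker buildx build --platform linux/arm/v7 -t youruser/yourapp:armv7 --push .
# On the Raspberry Pi, pull and run it:
docker run youruser/yourapp:armv7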

how to create a docker centos image that would only run on centos

I'm new to docker and I'm still trying to learn the concepts.
On my centos machine I created a test image that includes a C-compiled executable. Based on my understanding of docker, my intention and expectation was for the image to run on centos machines only. Here is my Dockerfile:
FROM centos:7
WORKDIR /opt/MYAPPS
COPY my_hello .
CMD my_hello
The image builds and works fine on the centos machine where I created it.
Then I pushed this image to my repo and pulled it to another centos machine, where it works correctly as well. So far so good.
As I mentioned I was expecting for this image to be limited to centos. In order to prove it I tried pulling it to other OSs, my ubuntu and my windows. To my surprise, it worked on both.
Obviously I'm missing something. Either I'm not grasping the concept of docker or I'm using the wrong "FROM" image.
As I mentioned I was expecting for this image to be limited to centos.
No: what you put in the image is merely the set of dependencies your executable needs to run, but in the end, everything resolves to system calls.
That means your image is limited by the kernel of the host machine, since the running process is making system calls to said kernel.
(As illustrated in "The Fascinating World of Linux System Calls".)
As long as the kernel (here Linux - even on Windows, through Hyper-V) supports your system calls, your program will run on those hosts.
See more at "How can Docker run distros with different kernels?".
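An easy way to see this for yourself (the commands are illustrative; the version string will be whatever kernel your host runs):
# On the host:
uname -r
# Inside the container - a CentOS userland, but no CentOS kernel:
docker run --rm centos:7 uname -r
# Both commands print the same kernel version, because the containerized
# process makes its system calls against the host's kernel.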

I just don't understand how docker works

I've read some articles about VMs (vmware, virtualbox..) vs docker, but I just can't understand what is going on.
There's an example of creating your own docker image: they start by pulling an ubuntu image from docker hub, install some stuff in there - django, for example - and package all of it as a docker image.
Then, if you have docker installed on a Mac, running that image should be like:
(HOST) MAC > docker > ubuntu VM > django
isn't it?
They say docker makes it possible to run django like:
MAC > docker > django image
But when you make the image you start with ubuntu, and django must be an ubuntu-based django. Where did I miss the point?
And some docker images, like mysql: what is the base OS of that running mysql? Is it possible to run that same docker image on ubuntu / on centos / both? How?
Don't see the "FROM ubuntu" as a VM with Ubuntu inside, but just as the Ubuntu libs needed to run the rest of the Docker image. A container does not load an entire OS, but uses its host's resources.
And see docker as a cloud: you will have a process (a container) running something and listening on a specific port.
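You can make that userland/kernel split visible directly (illustrative commands):
# The userland ("the libs from Ubuntu") comes from the image:
docker run --rm ubuntu cat /etc/os-release   # reports Ubuntu
# The kernel comes from the host:
docker run --rm ubuntu uname -r              # reports the host's kernel
# Run the same commands on an ubuntu host and on a centos host: only the
# uname -r output changes, which is why the mysql image runs on both.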
