Fair warning: I'm new to all of this, so there might be some mistakes in my thinking process.
I want to system test an application we are developing, and we ship this application via Docker, so that's what I want to test.
For GitLab CI, this means creating a Docker image which has Docker in Docker and Cypress, since that is what I'd like to use.
From checking the Docker docs I can see that Docker can be installed on a multitude of Linux distros, but not on Alpine. The official Docker image, however, is Alpine-based.
The Cypress docs, on the other hand, show that Cypress cannot be installed on Alpine: only the package managers "apt-get" and "yum" are supported, which correspond to Ubuntu and Fedora, respectively.
So as far as I can tell, it's not possible to have both of these at once? Which would be absolutely baffling (but so is the package manager chaos I just learned about).
What I tried:
used the Docker image as a base and tried to install Cypress (does not work because there is no installation manual and the packages you need to install via apt-get don't exist for apk)
used the Cypress image as a base and tried to install Docker (does not work because the Cypress images don't work)
used another image and tried to install both (does not work because installing Docker inside the Docker container does not work, that's why they have the image provided)
used DinD with another distro (cruizba/ubuntu-dind, fails with "dockerd is not running after max time")
So... what am I missing? Is there any way to get to the point where I can use both Cypress and DinD in the same image?
There is an image named blackholegalaxy/cypress-dind which combines DinD and Cypress.
Sadly it's really old and there is no way to update Docker to the newest version easily.
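In other words, what I'm after is roughly a Debian-based Cypress image with only the Docker CLI client added, while dockerd itself runs in GitLab's docker:dind service container and is reached via DOCKER_HOST. This is just a sketch of the idea; the cypress/base tag and the repository setup below are assumptions I haven't verified:

FROM cypress/base:20.9.0
# add Docker's apt repository and install only the client-side tooling;
# the daemon would run in the separate docker:dind service container
RUN apt-get update && apt-get install -y ca-certificates curl gnupg \
 && install -m 0755 -d /etc/apt/keyrings \
 && curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc \
 && echo "deb [signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian $(. /etc/os-release && echo $VERSION_CODENAME) stable" \
      > /etc/apt/sources.list.d/docker.list \
 && apt-get update && apt-get install -y docker-ce-cli docker-compose-plugin \
 && rm -rf /var/lib/apt/lists/*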
I have a Dockerfile which creates an image, and then I run it using docker compose together with a container built from a Postgres image. (This is to set up a local environment of Airflow; we use the mwaa local runner.)
Recently I got a new M1 Pro machine and I'm running into issues with the container.
The problem, from my understanding, is that the image is being built and then run on my machine, which has a different kind of CPU architecture, which causes pip to look for wheels for that architecture. My colleague has an Intel Mac and he says he doesn't experience any issues.
The build phase is OK, but when I run the container, docker compose runs an entrypoint script that also installs some Airflow providers and other dependencies, one of which is plyvel, which fails to install and causes other packages not to install as well. When I remove plyvel from the requirements.txt file, the installation completes, but some of my Airflow providers are missing files or attributes, which creates its own issues.
I tried forcing Docker to build and run the image and container as amd64 by changing the build command to:
docker build --platform linux/amd64 --rm --compress $3 -t amazon/mwaa-local:2.2 ./docker
which runs, but very slowly.
Also, I added platform: linux/amd64 to both the postgres and the local-runner services in the docker-compose file.
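That is, roughly this (service names as in the mwaa-local-runner compose file; only the added platform lines matter here):

services:
  postgres:
    platform: linux/amd64
  local-runner:
    platform: linux/amd64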
Then, when I spin up the containers, it takes a long time to reach a working state where I can access the Airflow UI in the browser, and the UI itself is very slow: every link takes a few seconds to process and redirect me. I believe this is due to some emulation or something.
I then found this article:
https://medium.com/nttlabs/buildx-multiarch-2c6c2df00ca2
It says there is a faster way to run without emulation, but I didn't understand how to implement it.
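If I understand it correctly, the idea would be to build the image natively for arm64 with buildx instead of emulating amd64, something like the following, but I'm not sure this is right, and it assumes arm64 wheels exist for every dependency (including plyvel, which is exactly what fails):

docker buildx build --platform linux/arm64 -t amazon/mwaa-local:2.2 --load ./docker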
In addition, found this Reddit thread:
https://www.reddit.com/r/docker/comments/qlrn3s/docker_on_m1_max_horrible_performance/
They suggest building and running the container inside a virtual machine; I'm not sure if that is the way to go in my situation.
I tried both Docker Desktop and Rancher Desktop (with dockerd), but both show the same symptoms.
I use the following module:
https://docs.ansible.com/ansible/latest/modules/docker_service_module.html?highlight=ansible%20doc
I can create and start a Docker container using this module. However, is it possible to execute tasks on this container (and preserve the changes)?
I mean:
install some yum package
insert some bash script into the container.
Could you give me some clues?
As a general rule, you don't install software on a running container. If you need a container with some software installed in it, you should build a custom image that has the software you need, and set it up so that it can do everything it needs on its own once you start it. (As an even broader rule, you shouldn't need to docker exec into a running container except to debug things; it definitely isn't part of the core "how to do things with containers" workflow.)
I would recommend following a standard Docker tutorial, such as Docker's official tutorial on building and running custom images. Once you have a working Docker image workflow, you'd use the Ansible docker_container module in place of the docker run command.
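As a rough sketch of that approach (the package name and script path below are placeholders), the two changes you list would become build steps in a Dockerfile rather than exec steps:

FROM centos:7
# install the yum package at build time instead of in a running container
RUN yum install -y some-package && yum clean all
# bake the bash script into the image
COPY my-script.sh /usr/local/bin/my-script.sh
RUN chmod +x /usr/local/bin/my-script.sh
CMD ["/usr/local/bin/my-script.sh"]

You would then point the docker_container task at the image built from this Dockerfile.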
Can a Docker made from Ubuntu 16.04.01 run on 16.04.5? Can all versions of Ubuntu run a Docker from all other versions of Ubuntu?
Yes. It can.
The main idea of containerizing an app is to package your app, its dependencies, and everything else required to run your application into a single unit. So, wherever there is a Docker engine, you can run your Docker image, no matter what your OS version is. Everything required to run your app is already packaged with the image.
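A quick way to see this for yourself: on any host with a Docker engine, the userland inside the container comes from the image, not from the host (the tag here is just an example):

docker run --rm ubuntu:16.04 cat /etc/os-release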
Take a look at Docker's official definition of What is a Container.
I'm new to Docker and I'm still trying to learn the concepts.
On my CentOS machine I created a test image that includes a C-compiled executable. Based on my understanding of Docker, my intention and expectation was for the image to run on CentOS machines only. Here is my Dockerfile:
FROM centos:7
WORKDIR /opt/MYAPPS
COPY my_hello .
CMD my_hello
The image builds and works fine on the CentOS machine where I created it.
Then I pushed this image to my repo and pulled it to another centos machine, and works correctly as well. So far so good.
As I mentioned, I was expecting this image to be limited to CentOS. In order to prove it, I tried pulling it on other OSes, my Ubuntu and my Windows machines. To my surprise, it worked on both.
Obviously I'm missing something. Either I'm not grasping the concept of docker or I'm using the wrong "FROM" image.
As I mentioned, I was expecting this image to be limited to CentOS.
No: what you put in the image is merely the dependencies your executable needs to run, but in the end, everything resolves to system calls.
That means your image is limited by the kernel of the host machine, since the running process makes system calls to that kernel.
(As illustrated in "The Fascinating World of Linux System Calls")
As long as the kernel (here Linux, even on Windows, through Hyper-V) supports your system calls, your program will run on those hosts.
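You can check this directly: the kernel version reported inside your centos:7 container is the host's kernel, and only the userland comes from the image:

uname -r                                          # on the host, e.g. an Ubuntu machine
docker run --rm centos:7 uname -r                 # same kernel version as the host
docker run --rm centos:7 cat /etc/redhat-release  # but a CentOS 7 userland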
See more at "How can Docker run distros with different kernels?".
I have a totally empty Debian 9 machine on which I installed docker-ce and nothing else.
My client wants me to run a website (already done locally on my PC) that he can migrate/move rapidly from one server to another by moving Docker images.
My idea is to start from some empty Docker image, and then manually install all the dependencies on it (ngingrtmp, apache2, nodejs, mysql, phpmyadmin, php, etc...)
I need to install all these dependencies MANUALLY (to keep control), not using ready-to-go Docker images from Docker Hub, and then create an IMAGE of ALL the things I have done (including these dependencies, but also files I will upload).
Problem is: I have no idea how to start from a blank image, connect to it, and then save a modified image with the components and dependencies I have installed.
I am aware that the SIZE may be bigger than with a simple Dockerfile, but I need to customize lots of things, such as using php5.6, apache2.2, editing some php.ini, etc.
regards
If you don't want to define your dependencies in the Dockerfile, then you can take the following approach: spin up a Linux container from a base image and go inside the container.
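For example, to start such a container and keep it running (the image tag and container name here are only examples):

sudo docker run -d --name base-container ubuntu:18.04 sleep infinity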
sudo docker exec -it <Container ID> /bin/bash
Install your dependencies as you would on any other Linux server:
sudo apt-get install -y ngingrtmp apache2 nodejs mysql phpmyadmin php
Then exit the container with Ctrl+P followed by Ctrl+Q, and commit the changes you made:
sudo docker commit CONTAINER_ID new-image-name
Run the docker images command and you will see the new image you have created; you can then use/move that image.
You can try with a Dockerfile with the following content
FROM scratch
But then you will need to build and add the operating system yourself.
For instance, alpine linux does this in the following way:
FROM scratch
ADD rootfs.tar.xz /
CMD ["/bin/sh"]
Where rootfs.tar.xz is a file of less than 2 MB, available in Alpine's GitHub repository (version 3.7 for the x86_64 arch):
https://github.com/gliderlabs/docker-alpine/tree/61c3181ad3127c5bedd098271ac05f49119c9915/versions/library-3.7/x86_64
Or you can begin with Alpine itself, but you said that you don't want to depend on ready-to-go Docker images.
A good starting point for you (if you decide to use Alpine Linux) could be the one available at https://github.com/docker-library/httpd/blob/eaf4c70fb21f167f77e0c9d4b6f8b8635b1cb4b6/2.4/alpine/Dockerfile
As you can see, a Dockerfile can become very big and complex, because within it you provision all the software you need to run your image.
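To give an idea of the shape it takes (the base image, package names and paths below are assumptions you would adapt, and pinning php5.6/apache2.2 would need additional repositories):

FROM debian:9
# provision the web stack yourself instead of reusing a ready-made image
RUN apt-get update && apt-get install -y apache2 php libapache2-mod-php \
    && rm -rf /var/lib/apt/lists/*
# add the site files you want to ship inside the image
COPY ./site/ /var/www/html/
EXPOSE 80
CMD ["apachectl", "-D", "FOREGROUND"]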
Once you have your Dockerfile, you can build the image with:
docker build .
You can give it a name:
docker build -t mycompany/myimage:1.0 .
Then you can run your image with:
docker run mycompany/myimage:1.0
Hope this helps.