Dev environment for gcc with Docker?

I would like to create a minimalist dev environment for occasional developers who only need Docker.
The ecosystem would have:
a code-server image to run Visual Studio Code
a gcc image to build the code
git to push/commit the code
ubuntu with some modifications to run the code
I looked at docker-in-docker, which could be a solution:
Docker
  code-server
    docker run -it -v ... gcc make
    docker run -it -v ... git git commit ...
    docker run -it -v ... ubuntu ./program
But that seems a bit overkill. What would be the proper way to have a full, well-separated dev environment that only requires Docker to be installed on the host machine (Linux, Windows, macOS, Chromium OS)?

I suggest using a Dockerfile.
This file specifies a few steps used to build an image.
The first line of the file specifies a base image (in your case, I would use Ubuntu):
FROM ubuntu:latest
Then you can, for example, copy files into the image or specify commands to run:
RUN apt-get update && apt-get install -y gcc make
RUN apt-get install -y git
and so on.
At the end, you may want to specify the program that is run when you start the container
CMD /bin/bash
Then you can build it with the command docker build -f Dockerfile -t devenv:latest . (note the trailing dot, which is the build context). This builds a new image named devenv:latest (latest is the tag) from the file Dockerfile.
Then, you can create a container from that image using docker run devenv:latest.
If you want an interactive shell inside the container, start it with docker run -it devenv:latest instead.
If you want to, you can also use the code-server base image instead of ubuntu:latest.
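Putting the pieces together, a minimal sketch of such a Dockerfile might look like this (the package list and the devenv name are illustrative, not taken from an existing project):
FROM ubuntu:latest
# Install the compiler toolchain and git; adjust the package set as needed
RUN export DEBIAN_FRONTEND=noninteractive \
 && apt-get update \
 && apt-get install -y --no-install-recommends build-essential gcc make git \
 && rm -rf /var/lib/apt/lists/*
# Work in a directory that you bind-mount from the host
WORKDIR /workspace
# Drop into a shell by default
CMD ["/bin/bash"]
Build and run it with, for example:
docker build -f Dockerfile -t devenv:latest .
docker run -it -v "$PWD":/workspace devenv:latest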

Related

Is it possible to create a Docker image that creates and runs another Docker container?

I need a machine that runs Kali OS, and builds and runs some Docker container.
This was easy using VirtualBox.
But it proved hard (impossible?) with Docker.
So I want to create an image based on Kali, and then build and run some Docker container. Is this possible?
I wrote something like this:
FROM kalilinux/kali-rolling
RUN apt update -y
RUN apt install -y git
RUN apt install -y docker.io
RUN git clone https://something
RUN docker build . -f /something/Dockerfile -t my_app
CMD docker run my_app
A Docker container is a process running in some environment, most probably in a local Docker VM in your case. If you want to build another container on that VM you need to install Docker inside it, which is heavy, cumbersome, and not advisable.
If you want a Kali image (for some obscure reason) you can use ready-made images such as kalilinux/kali-rolling from Docker Hub.
There is no need to create another Docker image that builds Docker.
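For example, pulling and running the ready-made rolling image directly is just (a minimal sketch):
docker pull kalilinux/kali-rolling
docker run -it kalilinux/kali-rolling /bin/bash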
I would suggest you read up and take a docker tutorial.

How to install qemu emulator for arm in a docker container

My goal is to build a Docker build image that can be used as a CI stage that's capable of building a multi-architecture image.
FROM public.ecr.aws/docker/library/docker:20.10.11-dind
# Add the buildx plugin to Docker
COPY --from=docker/buildx-bin:0.7.1 /buildx /usr/libexec/docker/cli-plugins/docker-buildx
# Create a buildx image builder that we'll then use within this container to build our multi-architecture images
RUN docker buildx create --platform linux/amd64,linux/arm64 --name=my-builder --use
The Dockerfile above builds the container I need, but does not include the emulator for arm64. This means when I try to use it to build a multi-architecture image via a command like docker buildx build --platform=$SUPPORTED_ARCHITECTURES --build-arg PHP_VERSION=8.0.1 -t my-repo:latest ., I get the error:
error: failed to solve: process "/dev/.buildkit_qemu_emulator /bin/sh -c apt-get update && apt-get -y install -q ....
The solution is to run docker run --rm --privileged tonistiigi/binfmt --install arm64 as part of the CI steps, which uses the buildx container I previously built. However, I'd really like to understand why the emulator cannot seem to be installed in the container by adding something like this to the Dockerfile:
# Install arm emulator
COPY --from=tonistiigi/binfmt /usr/bin/binfmt /usr/bin/binfmt
RUN /usr/bin/binfmt --install arm64
I'd really like to understand why the emulator cannot seem to be installed in the container
Because when you perform a RUN command, the result is to capture the filesystem changes from that step and save them to a new layer in your image. But the qemu setup command isn't really modifying the filesystem; it's modifying the host kernel, which is why it needs --privileged to run. You'll see evidence of those kernel changes in /proc/sys/fs/binfmt_misc/ on the host after configuring qemu. It's not possible to specify that flag as part of the container build: all steps run by the Dockerfile are unprivileged, without access to the host devices or the ability to alter the host kernel.
The standard practice in CI systems is to configure the host in advance, and then run the docker build. In GitHub Actions, that's done with the setup-qemu-action before running the build step.
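In a generic CI script the same idea looks roughly like this (a sketch using the image and platform names from the question above):
# One-time, privileged step on the host/runner: register the QEMU binfmt handlers in the kernel
docker run --rm --privileged tonistiigi/binfmt --install arm64
# The registered handlers are then visible on the host:
ls /proc/sys/fs/binfmt_misc/
# After that, the unprivileged multi-architecture build works
docker buildx create --platform linux/amd64,linux/arm64 --name=my-builder --use
docker buildx build --platform linux/amd64,linux/arm64 -t my-repo:latest .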

Install package in running docker container

I've been using a docker container to build the chromium browser (building for Android on Debian 10). I've already created a Dockerfile that contains most of the packages I need.
Now, after building and running the container, I followed the instructions, which asked me to execute an install script (./build/install-build-deps-android.sh). In this script multiple apt install commands are executed.
My question now is, is there a way to install these packages without rebuilding the container? Downloading and building it took rather long, plus rebuilding a container each time a new package is required seems kind of suboptimal. The error I get when executing the install script is:
./build/install-build-deps-android.sh: line 21: lsb_release: command not found
(I guess there will be multiple missing packages). And using apt will give:
root@677e294147dd:/android-build/chromium/src# apt install nginx
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package nginx
(nginx just as an example install).
I'm thankful for any hints, as I could only find guides that use the Dockerfile to install packages.
You can use docker commit:
Start your container: sudo docker run IMAGE_NAME
Access your container using bash: sudo docker exec -it CONTAINER_ID bash
Install whatever you need inside the container
Exit container's bash
Commit your changes: sudo docker commit CONTAINER_ID NEW_IMAGE_NAME
If you run now docker images, you will see NEW_IMAGE_NAME listed under your local images.
Next time, when starting the docker container, use the new docker image you just created:
sudo docker run NEW_IMAGE_NAME - this one will include your additional installations.
Answer based on the following tutorial: How to commit changes to docker image
Thanks to @adnanmuttaleb and @David Maze (unfortunately, they only replied in comments, so I cannot accept their answers).
What I did was edit the Dockerfile for any later rebuilds (which already happened), and use docker exec to install the needed dependencies from outside the container. Also remember to run
apt update
first, otherwise apt cannot locate any packages...
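A minimal sketch of that approach, using lsb-release (the package that provides the missing lsb_release command from the error above) as an example; CONTAINER_ID is a placeholder:
# Refresh the package lists and install into the already-running container
docker exec -it CONTAINER_ID bash -c "apt-get update && apt-get install -y lsb-release"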
A slight variation of the steps suggested by Arye that worked better for me:
Create container from image and access in interactive mode: docker run -it IMAGE_NAME /bin/bash
Modify container as desired
Leave container: exit
List launched containers: docker ps -a and copy the ID of the container just modified
Save to a new image: docker commit CONTAINER_ID NEW_IMAGE_NAME
If you haven't followed the Post-installation steps for Linux, you might have to prefix Docker commands with sudo.

How to run a Dockerfile or Docker image to install the Python dependencies

Sorry for the basic question; I am new to Docker and I want to install the dependencies by using the Dockerfile, so please guide me on how to run this file on Ubuntu.
The author has written the dependencies in the Dockerfile for building OpenSfM.
GitHub Repository Link
FROM ubuntu:18.04
# Install apt-getable dependencies
RUN export DEBIAN_FRONTEND=noninteractive \
&& apt-get update \
&& apt-get install -y \
build-essential \
Can anyone guide me how to run the file and install the dependencies on Ubuntu?
You really should follow mchawre's advice and read the docker get-started. However, I can try to direct you into the right direction.
I want to install the dependencies by using the docker file
You have to understand that a Dockerfile builds into a Docker image, which can then be run as a Docker container. You can think of a Docker container as a lightweight virtual machine. With this in mind, your statement does not quite make sense: you cannot install dependencies for your host system (the system in which you might want to start the Docker container) with the help of a Docker image. This is not how Docker containers are supposed to work.
Instead, the Dockerfile allows you to create a virtualized (isolated) environment into which you can "ssh" (the Docker way: docker exec -it <container_name> bash) and then build the respective application.
If you do not want to mess with Docker at all and your system runs something close to ubuntu:18.04, you can also manually execute the instructions from the Dockerfile on your normal system in order to build your desired application there.
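If you do want to use Docker, building the image from the directory that contains the Dockerfile and then opening a shell inside a container looks roughly like this (the opensfm tag is just an example name):
# Build an image from the Dockerfile in the current directory
docker build -t opensfm .
# Start a container from that image and get a shell inside it
docker run -it opensfm bash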

How to link binaries between docker containers

Is it possible to use docker to expose the binary from one container to another container?
For example, I have 2 containers:
centos6
sles11
I need both of these containers to have similar versions of git installed. Unfortunately the sles container does not have the version of git that I need.
I want to spin up a git container like so:
$ cat Dockerfile
FROM ubuntu:14.04
MAINTAINER spuder
RUN apt-get update
RUN apt-get install -yq git
CMD /usr/bin/git
# ENTRYPOINT ['/usr/bin/git']
Then link the centos6 and sles11 containers to the git container so that they both have access to a git binary, without going through the trouble of installing it.
I'm running into the following problems:
You can't link a container to another non-running container
I'm not sure if this is how docker containers are supposed to be used.
Looking at the docker documentation, it appears that linked containers share environment variables and ports, but not necessarily access to each other's entrypoints.
How could I link the git container so that the cent and sles containers can access this command? Is this possible?
You could create a dedicated git container and expose the data it downloads as a volume, then share that volume with the other two containers (centos6 and sles11). Volumes are available even when a container is not running.
If you want the other two containers to be able to run git from the dedicated git container, then you'll need to install (or copy) that git binary onto the shared volume.
Note that volumes are not part of an image, so they don't get preserved or exported when you docker save or docker export. They must be backed up separately.
Example
Dockerfile:
FROM ubuntu
RUN apt-get update; apt-get install -y git
VOLUME /gitdata
WORKDIR /gitdata
CMD git clone https://github.com/metalivedev/isawesome.git
Then run:
$ docker build -t gitimage .
# Create the data container, which automatically clones and exits
$ docker run -v /gitdata --name gitcontainer gitimage
Cloning into 'isawesome'...
# This is just a generic container, but what I do in the shell
# you could do in your centos6 container, for example
$ docker run -it --rm --volumes-from gitcontainer ubuntu /bin/bash
root@e01e351e3ba8:/# cd gitdata/
root@e01e351e3ba8:/gitdata# ls
isawesome
root@e01e351e3ba8:/gitdata# cd isawesome/
root@e01e351e3ba8:/gitdata/isawesome# ls
Dockerfile README.md container.conf dotcloud.yml nginx.conf
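If you also want the git binary itself on the shared volume, as suggested above, a hedged variation of the Dockerfile copies it there at runtime (note that the binary still depends on shared libraries being present in the consuming container, so this only works reliably between sufficiently similar distributions):
FROM ubuntu
RUN apt-get update; apt-get install -y git
VOLUME /gitdata
WORKDIR /gitdata
# Copy the git binary onto the shared volume, then clone as before
CMD cp /usr/bin/git /gitdata/ && git clone https://github.com/metalivedev/isawesome.git
The centos6 and sles11 containers started with --volumes-from gitcontainer could then try running /gitdata/git, subject to the library caveat.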
