Run Docker inside Docker to avoid installing multiple dependencies

I'm facing a dilemma that I'd like to open up for constructive discussion here.
My use case is very simple:
I need to run a bash script that executes several commands: installing packages with npm, running the aws-cli, and querying PostgreSQL (for this last task I use psql). An easy task, I'd say, except that Docker slightly complicates the situation.
The problem would be solved if I created an image with all the dependencies installed. However, the result would be a pretty big image, so I'd rather not go with that solution.
What about running the script with one Docker image and then, from the script (inside Docker), running something like
docker run postgres:9.6.3-alpine psql
docker run node:9.8 npm
In other words, this would be running Docker inside Docker. What do you think?

If you want to execute docker run inside a container, start the outer container with the -v /var/run/docker.sock:/var/run/docker.sock option.
With this, the container talks to the host's Docker daemon, so it can see and run any image available on the host.
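A minimal sketch of the approach (the my-runner image and script.sh name are hypothetical stand-ins for your own script image). The docker function defined at the top is a stub that just echoes each command, so the sketch can be dry-run without a Docker daemon; delete it to run the real commands:

```shell
#!/bin/sh
# Stub so this sketch can be dry-run without a Docker daemon;
# remove this function to execute the real commands.
docker() { echo "docker $*"; }

# Outer container: mount the host's Docker socket so the docker CLI
# inside it talks to the host daemon. "my-runner" is a hypothetical
# image containing your bash script plus the docker CLI.
docker run -v /var/run/docker.sock:/var/run/docker.sock my-runner ./script.sh

# Inside script.sh you can then launch sibling containers on the host:
docker run --rm postgres:9.6.3-alpine psql --version
docker run --rm node:9.8 npm --version
```

Note that with the socket mounted, the inner docker run calls launch sibling containers on the host rather than truly nested ones, which is usually what you want here.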

Related

How to test a Dockerfile with minimal overhead

I'm trying to learn how to write a Dockerfile. Currently my strategy is as follows:
Guess which commands are correct based on the documentation.
Run sudo docker-compose up --build -d to build a docker container
Wait ~5 minutes for my anaconda packages to install
Find that I made a mistake on step 15, and go back to step 1.
Is there a way to interactively enter the commands for a Dockerfile, or to cache the first 14 successful steps so I don't need to rebuild the entire file? I saw something about docker exec, but it seems that's only for running containers. I also want to use the same syntax as in the Dockerfile (i.e. ENTRYPOINT and ENV), since I'm not sure what the bash equivalent is, or if it exists.
You can run docker-compose without the --build flag so that you don't rebuild the image every time, although since you are testing the Dockerfile itself, I don't know if you have many options here. Docker does cache builds automatically, but a layer is only reused if nothing has changed up to that instruction since the last build; in your example, a mistake at step 15 means steps 1-14 come from the cache and only step 15 onward is re-executed. There is no way to build an image interactively; Docker doesn't work like that. Lastly, docker exec just runs commands inside a container created from an already-built image.
Some references for you: the Docker build cache documentation and the Dockerfile best practices guide.
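Because of how the layer cache works, it helps to order the Dockerfile so slow, rarely-changing steps come first and frequently-edited steps come last. A hedged sketch, assuming a conda setup; the base image, environment.yml, and main.py names are illustrative:

```dockerfile
FROM continuumio/miniconda3

# Slow, rarely-changing step first: this layer is cached and reused
# as long as environment.yml does not change.
COPY environment.yml .
RUN conda env update -f environment.yml

# Frequently-edited application code last: editing it only
# invalidates the layers from here down, so rebuilds stay fast.
COPY . /app
WORKDIR /app

ENV PYTHONUNBUFFERED=1
ENTRYPOINT ["python", "main.py"]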

Use Docker to Distribute CLI Application

I'm somewhat new to Docker. I would like to be able to use Docker to distribute a CLI program, but run the program normally once it has been installed. To be specific, after running docker build on the system, I need to be able to simply run my-program in the terminal, not docker run my-program. How can I do this?
I tried something with a Makefile which runs docker build -t my-program . and then writes a shell script to ~/.local/bin/ called my-program that runs docker run my-program, but this adds another container every time I run the script.
EDIT: I realize this is the expected behavior of docker run, but it does not work for my use case.
Any help is greatly appreciated!
If you want to keep your script, add the --rm flag to the docker run command. It removes the container automatically after the entrypoint process has exited.
Additionally, I would personally prefer an alias for this. Simply add something like alias my-program="docker run --rm my-program" to your ~/.bashrc or ~/.zshrc file. This even has the advantage that all parameters after the alias (my-program param1 param2) are automatically forwarded to the entrypoint of your image without any additional effort.
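If you'd rather keep the wrapper-script approach (which also works from programs that don't read your shell rc files), a sketch of what the Makefile could write is below. It installs into a temp directory so the sketch is self-contained; a real Makefile would target ~/.local/bin. "$@" forwards all arguments to the container's entrypoint:

```shell
#!/bin/sh
# Install into a temp dir so this sketch is self-contained;
# a Makefile would use ~/.local/bin instead.
bindir=$(mktemp -d)

cat > "$bindir/my-program" <<'EOF'
#!/bin/sh
# --rm removes the container when its process exits, so repeated
# invocations do not leave stopped containers behind.
exec docker run --rm my-program "$@"
EOF
chmod +x "$bindir/my-program"
echo "installed $bindir/my-program"
```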

Is there a point in Docker start?

So, is there a point to the start command, as in docker start -i albineContainer?
If I do this, I can't really do anything with the Alpine inside the container; I would have to do a run and create another container with the -it flags and sh afterwards (or /bin/bash, I don't remember correctly right now).
Is that how it will go most of the time? Deleting and rebuilding containers, and using -it if you want to do stuff in them? Or does it depend more on the Dockerfile and how you define the CMD?
I'm new to Docker in general and trying to understand the basics of how to use it. Thanks for the help.
Running docker run/exec with -it means you run the docker container and attach an interactive terminal to it.
Note that you can also run docker applications without attaching to them, and they will still run in the background.
Docker allows you to run a program (which can be bash, but does not have to be) in an isolated environment.
For example, try running the jenkins Docker image: https://hub.docker.com/_/jenkins.
This will create a container without you having to attach to it, and you will still be able to use it.
You can also attach to an existing, running container by using docker exec -it [container_name] bash.
You can also use docker logs to peek at the stdout of a certain docker container, without actually attaching to its shell interactively.
You almost never use docker start. It's really only useful in two unusual circumstances:
If you've created a container with docker create, then docker start will run the process you named there. (But it's much more common to use docker run to do both things together.)
If you've stopped a container with docker stop, docker start will run its process again. (But typically you'll want to docker rm the container once you've stopped it.)
Your question and other comments hint at using an interactive shell in an unmodified Alpine container. Neither is a typical practice. Usually you'll take some complete application and its dependencies and package it into an image, and docker run will run that complete packaged application. Tutorials like Docker's Build and run your image go through this workflow in reasonable detail.
My general day-to-day workflow involves building and testing a program outside of Docker. Once I believe it works, then I run docker build and docker run, and docker rm the container once I'm done. I rarely run docker exec: it is a useful debugging tool but not the standard way to interact with a process. docker start isn't something I really ever run.
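The lifecycle above can be sketched as a command sequence. The docker function here is a stub that echoes each command so the sequence can be read (and dry-run) without a daemon; web, web2, and nginx are illustrative names:

```shell
#!/bin/sh
# Stub: echo each command instead of executing it, so this
# lifecycle sketch can be dry-run without a Docker daemon.
docker() { echo "docker $*"; }

# The common path: docker run creates and starts in one step.
docker run -d --name web nginx

# The two unusual paths that docker start covers:
docker create --name web2 nginx   # create without starting...
docker start web2                 # ...then start it later
docker stop web2                  # stop the container
docker start web2                 # run its process again

# The more typical cleanup instead of restarting:
docker stop web
docker rm web
```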

Running mongodump from within a docker container

I need to be able to run the mongodump utility from within a docker container because the developers on my team do not have the utilities installed locally. So, I created a node script, export.js that handles running the tool and also zips the output.
Ideally what I want is to be able to run an npm script:
{
"db:export": "docker build -t some-local-container -f docker-images/Dockerfile.export && docker run some-local-container"
}
some-local-container would have access to the node_modules and the export.js script I would like to run. It would also run this script as the default ENTRYPOINT. Obviously, it was built with both node and mongo installed.
My question is vague, but, is there an easier way to do this? I feel like this is overkill for what I want to do. This is a development only script so it didn't seem to make sense living within the Dockerfile for our Mongo instance.
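For what it's worth, the Dockerfile.export side of this can stay small. A hedged sketch, assuming export.js and package.json live in the project root; the base image and the mongo-tools package name are illustrative, and you'd want to match whatever MongoDB version your server runs:

```dockerfile
FROM node:14

# mongodump ships in MongoDB's tools package; pulled in here via apt
# (illustrative -- pin the version that matches your server).
RUN apt-get update && apt-get install -y mongo-tools \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY export.js ./

ENTRYPOINT ["node", "export.js"]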

Is it possible to wrap an entire Ubuntu 14 OS in a Docker image?

I have a Ubuntu 14 desktop, on which I do some of my development work.
This work mainly revolves around Django & Flask development using PyCharm
I was wondering if it is possible to wrap the entire OS file system in a Docker container, so that my whole development environment, including PyCharm and any other tools, would become portable.
Yes, this is where Docker shines. Once you install Docker you can run:
docker run --name my-dev -it ubuntu:14.04 /bin/bash
and this will put you, as root, inside a Docker container's bash prompt. It is, for all intents and purposes, the entire OS without anything extra; you will need to install the extras, like PyCharm, Flask, Django, etc., yourself. The environment you start with has nothing, so you will have to add things like pip (apt-get install -y python-pip) and other goodies. Once you have your entire environment, you can exit (with exit, or ^D) and you will be back in your host operating system. Then you can commit:
docker commit -m 'this is my development image' my-dev my-dev
This takes the container you just ran (and modified) and saves it on your machine as an image named my-dev. Any time in the future you can run it again using the invocation:
docker run -it my-dev /bin/bash
Building a Docker image by hand like this is harder to reproduce; it becomes easier once you learn how to describe the base image (ubuntu:14.04) and all of the modifications you want to make to it in a file called Dockerfile. I have an example of a Dockerfile here:
https://github.com/tacodata/pythondev
This builds my python development environment, including git, ssh keys, compilers, etc. It does have my name hardcoded in it, so it won't help you much with your own development (I need to fix that). Anyway, you can download the Dockerfile, change the details, and create your own image like this:
docker build -t my-dev - < Dockerfile
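A minimal Dockerfile along those lines might look like this (the package list is illustrative; swap in whatever your environment actually needs):

```dockerfile
FROM ubuntu:14.04

# System packages and pip, mirroring the interactive apt-get steps above.
RUN apt-get update && apt-get install -y \
        python-pip git openssh-client build-essential \
    && rm -rf /var/lib/apt/lists/*

# Python development packages.
RUN pip install flask django

CMD ["/bin/bash"]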
There are hundreds of examples on the Docker hub which is where I started with mine.
-g
