Ansible docker module missing CP command? - docker

The docker client offers the cp sub-command as explained here, which is very handy when one needs to copy a file into a container (note: this is somewhat analogous to the Dockerfile ADD instruction in image building). In Docker 1.8 the cp command has even been expanded a bit.
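For reference, the basic syntax looks like this (the container name and paths are just illustrative):
# copy a file out of a running container (the original behaviour)
docker cp mycontainer:/var/log/app.log ./app.log
# since Docker 1.8 you can also copy from the host into a container
docker cp ./app.conf mycontainer:/etc/app.conf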
However, reading the Ansible docker module documentation, it appears that this is missing. Here are my two questions:
Did I misunderstand the Ansible documentation?
If Ansible is indeed missing cp support, has anyone found a workaround? I can think of something like using Ansible's copy module to transfer the files to the remote machine first and then running the native docker client with cp there, but ideally Ansible's docker module would handle this in a single step.
Thanks in advance.

You can also use the synchronize module; examples are provided at this link:
http://opensolitude.com/2015/05/26/building-docker-images-with-ansible.html
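For example, a task along these lines can push files to the remote Docker host before building or copying; the paths are illustrative, and in current Ansible the module lives at ansible.posix.synchronize:
- name: sync local files to the remote Docker host
  ansible.posix.synchronize:
    src: ./app-files/
    dest: /tmp/docker-build-context/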

Using the Ansible shell module helped:
- name: copy db dump to localhost
  ansible.builtin.shell: docker cp container:/tmp/dump.sql /tmp/dump.sql
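For the original use case (copying into a container), the two-step workaround sketched in the question might look roughly like this; the container name and paths are illustrative:
- name: stage the file on the Docker host
  ansible.builtin.copy:
    src: ./dump.sql
    dest: /tmp/dump.sql

- name: copy the staged file into the container
  ansible.builtin.shell: docker cp /tmp/dump.sql mycontainer:/tmp/dump.sql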

You didn't. Ansible's docker module does not support copying files or folders to or from a container.
There is no simple way to do so; you need a hack. Off the top of my head, you can play with the '-' argument of the docker cp command.
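A rough sketch of that hack (container name and paths are illustrative); docker cp treats - as a tar archive on stdin or stdout:
# stream a local file into the container as a tar archive
tar -cf - dump.sql | docker cp - mycontainer:/tmp
# or stream a path out of the container and unpack it locally
docker cp mycontainer:/tmp/dump.sql - | tar -xf -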
However, from my point of view, if you wish to copy something into a container you're probably doing something wrong. Containers should be ephemeral.

Related

Docker container copy files from local path into container

I need to copy my customized Keycloak themes into the Keycloak container to use them, as mentioned here:
https://medium.com/@auscunningham/change-login-theme-in-keycloak-docker-image-55b5fa5ceec4
After identifying my container id with docker container ls and listing the files like this: docker exec 7e3a420017a8 ls ./keycloak/themes
It returns the list of themes correctly, but using this to copy my files from local to container:
docker cp ./mycustomthem 7e3a420017a8:/keycloak/themes/
or
docker cp ./mycustomthem 7e3a420017a8:./keycloak/themes/
I get the following error:
Error: No such container:path: 7e3a420017a8:/keycloak
I cannot see where the error is, since I can list the files in that folder inside the container. Could you help me?
Thank you in advance.
Works on my computer.
docker cp mycustomthem e67f76e8740b:/opt/jboss/keycloak/themes/raincatcher-theme
You have used the wrong path in the command; use the full path /opt/jboss/keycloak/themes/raincatcher-theme.
This seems like a weird way to approach this problem. Why not just have a Dockerfile that uses the Keycloak image as its base and copies the theme into the container at build time? Then just run the image you build. This will also be a more stable pattern in the long term if you ever decide to add any plugins or customizations, and it provides an easy upgrade path to new versions: just change the base image in your Dockerfile.
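A minimal sketch of that approach, assuming the jboss/keycloak base image and the theme directory from the question:
FROM jboss/keycloak
# bake the custom theme into the image at build time
COPY mycustomthem /opt/jboss/keycloak/themes/mycustomthem
Build and run it with something like docker build -t keycloak-custom . followed by docker run keycloak-custom.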
Update according to your new question update:
Try the following:
docker cp ./mycustomthem 7e3a420017a8:/opt/jboss/keycloak/themes/
The correct path in Keycloak is actually /opt/jboss/keycloak/themes/

Is it possible to specify a custom Dockerfile for docker run?

I have searched high and low for an answer to this question. Perhaps it's not possible!
I have some Dockerfiles in directories such as dev/Dockerfile and live/Dockerfile, and I cannot find a way to provide these custom Dockerfiles to docker run. docker build has the -f option, but I cannot find a corresponding option for docker run from the root of the app. At the moment I think I will write my npm/gulp script to simply change into those directories, but this is clearly not ideal.
Any ideas?
You can't - that's not how Docker works:
The Dockerfile is used with docker build to build an image
The resulting image is used with docker run to run a container
If you need to make changes at run-time, then you need to modify your base image so that it can take, e.g. a command-line option to docker run, or a configuration file as a mount.
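With the layout from the question, that workflow would look something like this (the image tags are illustrative):
# build a separate image from each Dockerfile
docker build -f dev/Dockerfile -t myapp:dev .
docker build -f live/Dockerfile -t myapp:live .
# then run whichever image you need
docker run myapp:dev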

How to access or pass host file to Docker Python script

I am using Docker to containerize a Python script. If Docker wasn't in the picture, I would pass a file path to the script, which would then work on that file.
python coolscript.py data.csv
As a Docker novice, I'm not sure how to accomplish this. Currently, I am automatically executing the script when the container launches.
docker run coolcontainer python coolscript.py data.csv
Since the data.csv file path isn't known when the image is built, it's not included in the container and I can't seem to access it. I've seen some forums saying to mount the host filesystem, but that seems like overkill since I just want one file. Is there a way to just send that one file into the container at runtime? How would you architect this?
The -v option for bind mounts should do the trick:
docker container run -v /my/host/path:/my/container/path coolcontainer python /my/container/path/coolscript.py /my/container/path/data.csv
Place both files in /my/host/path

A dummies approach to building an image and running your own code on Docker

I am trying to work with Docker. I want to run a super-simple program on Docker (to get acquainted with it).
I have gone through most of Docker's own tutorials, but none of them deal with running your own code anywhere, so I am left puzzled. When searching online there are a lot of hits (which I have attempted to understand), but most of them either involve more unknown tools (Maven, Spring Boot, Django) or are far too complicated.
Say I have a helloworld.py (or helloworld.java). How do I go about running it on Docker? By running I mean upload and execute.
Do I need to download Java on Docker? What sequence of steps is needed?
I know this is a "stupid" question, which is why I specified a dummies approach.
Any help will be greatly appreciated, even links that cover this (which I have not succeeded in finding).
This is a basic image for running a "hello world" example in Python:
You have to create these two files in a folder.
Dockerfile:
FROM python:2
COPY ./helloworld.py /
CMD ["/usr/bin/python", "/helloworld.py"]
helloworld.py:
print "hello world"
Look for the Dockerfile reference to understand what FROM, COPY and CMD do.
First you build the image:
docker build -t hellopython <path-of-image-folder>
Verify that the image is listed:
docker images
Run a new container:
docker run hellopython
Use ps to list the containers:
docker ps -a

Is it possible to use a "blank" docker container without any install on it?

I'm new to Docker and I think I have understood that Docker is an application virtualization tool (as opposed to OS virtualization). I understand, from this image, that Docker provides a very blank environment with a given file structure and executes on the host kernel. What we need to do is put our application and its dependencies (with no OS) into it to get a very light, portable container of our app.
But it seems there is a dark side to Docker: each Dockerfile begins with a FROM instruction.
I saw this and this, but I'm not sure I understand. It sounds like Docker is close to a kind of simplified OS virtualizer.
I was interested in the advantage of image size, but if we have to install an OS in each image my "portable" application will quickly become quite heavy.
Is there really no way to use a "blank" image?
You can start with FROM scratch which is an empty filesystem.
Please see the section on Creating a Base Image if you'd like to spin up your own minimal root file system.
You might be surprised how many dependencies your application actually has on the root file system, and in the end, it is usually more efficient to use one of the standard root file systems in your FROM statement, as Charles Duffy commented above.
empty/Dockerfile
FROM scratch
WORKDIR /
build and check size
docker build empty/ -t empty
docker images | grep empty
This may be a bit too late, but I just had a use case where I needed to create a bare-bones container that I could launch as part of a multi-container docker-compose setup and get into afterwards via /bin/bash. Keep in mind that a Docker container must run a process, and the container exists only for as long as that process is running. So I created this container with just Python in it and copied in a two-line Python script that just makes it sleep. Here's what I did.
1. Create the python script wait_service.py with the following code:
import time
time.sleep(1000)
2. Create the Dockerfile with just the following lines:
FROM python:2.7
RUN mkdir -p /test
WORKDIR /test
COPY wait_service.py /test/
CMD python wait_service.py
3. Build and run the container. Using the container id, I could then get inside it. Please adjust the sleep time based on how long you want to keep this container.
Your application has to have some underlying OS; without it, there is no way for it to start.
I think the most basic one in the Docker index is busybox, so FROM busybox will give you a very minimal setup.
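A minimal sketch, assuming you just want to see something run (the echo is purely illustrative):
FROM busybox
CMD ["echo", "hello from a minimal busybox container"]
Build it with docker build -t minimal . and run it with docker run minimal.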
Docker also uses a lot of caching for each of its layers. So every Docker image that uses FROM centos:centos7 at the top will share one single copy of the minimal centos7 base image.
The base images are very minimal, so there is nothing to worry about.
