Export environment variables from docker to host

I am running a Docker container on Jenkins. I can't install anything on Jenkins, so I did some processing in the Docker container and want to get the results out to the host. If I set an environment variable in the Docker container, how do I extract it to my Jenkins host?
I can see that I can write the env variable to a file and copy it to the host, but is there another way?

When running the container, you can mount a file or a folder into the container. Inside your container, you can write to this file and have the changes reflected in the host file on the machine.
To do that, create a file result.txt on the host machine and, when running the container, specify the -v option to mount the file:
docker run -v "$(pwd)/result.txt:/result.txt" ...
Then let the job running inside the container write its results into this file; they will be visible on the host.
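A minimal end-to-end sketch of this approach (the ubuntu image and the RESULT variable name are just placeholders):
touch result.txt
docker run --rm -v "$(pwd)/result.txt:/result.txt" \
  ubuntu sh -c 'RESULT="some value"; echo "$RESULT" > /result.txt'
cat result.txt
Note that the file must exist on the host before the run, otherwise Docker creates a directory with that name instead.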

Related

Run command on jenkins docker container to copy file to host

Is there a way to copy file from docker container to the host by running the command on the container itself?
I know that I can use a volume, but that will not work for me; I want to copy files from the container to arbitrary places on the host.
Is SCP-ing the file via SSH the only way?
From "Copying files from Docker container to host", you can run:
docker cp <containerId>:/file/path/within/container /host/path/target
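For example (the container ID and paths here are hypothetical), on the host:
docker ps
docker cp 1a2b3c4d:/var/log/app/output.log /tmp/
Note that docker cp is always run on the host, not inside the container.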

Get data from container in a volume

I have a Tomcat server running inside a Docker container. I would like to access the conf files of the server from the host.
To do that, I tried to run my container with these options:
docker run -v /home/empty_dir:/usr/local/tomcat/conf test
The conf directory is created on the host but it is empty, and thus the server cannot start.
So I'm looking for a way to populate my directory with the server's default conf files from the container.
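Bind-mounting an empty host directory shadows whatever the image ships at that path, which is why conf ends up empty. One common workaround (a sketch; the image name test comes from the question) is to copy the defaults out of a stopped container first, then bind-mount the populated directory:
docker create --name tmp-conf test
docker cp tmp-conf:/usr/local/tomcat/conf/. /home/empty_dir
docker rm tmp-conf
docker run -v /home/empty_dir:/usr/local/tomcat/conf test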

How to configure cassandra.yaml, which is inside the Cassandra Docker image at /etc/cassandra/cassandra.yaml

I am trying to edit cassandra.yaml, which is inside the Docker container at /etc/cassandra/cassandra.yaml. I can edit it by logging into the container, but how can I do it from the host?
There are multiple ways to achieve this from the host. You can simply use COPY or RUN in the Dockerfile, or basic Linux commands such as sed or cat, to place your configuration into the image. Another option is to pass environment variables when running your Cassandra image; they are forwarded to the spawned container. You can also use a Docker volume to mount the file from the host into the container, mapping the configuration you want onto cassandra.yaml as shown below:
$ docker container run -v ~/home/MyWorkspace/cassandra.yaml:/etc/cassandra/cassandra.yaml your_cassandra_image_name
If you are using Docker Swarm, you can use Docker configs to store the configuration files externally (other external services such as etcd or Consul can also be used). Hope this helps.
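As a sketch of the environment-variable route, assuming the official cassandra image (which maps a handful of CASSANDRA_* variables into cassandra.yaml at startup):
docker run -d --name my-cassandra \
  -e CASSANDRA_CLUSTER_NAME="MyCluster" \
  -e CASSANDRA_NUM_TOKENS=128 \
  cassandra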
To edit cassandra.yaml:
1) Copy the file from your Docker container to your system
From the command line:
docker ps
(to get your container ID)
Then:
docker cp your_container_id:/etc/cassandra/cassandra.yaml C:\Users\your_destination
Once the file is copied, you should be able to see it in your destination folder
2) Open it and make the changes you want
3) Copy your file back into your Docker container
docker cp C:\Users\your_destination\cassandra.yaml your_container_id:/etc/cassandra
4) Restart your container for the changes to be effective
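For step 4 (container ID as above):
docker restart your_container_id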

How to get an artifact generated from a container?

I have a Dockerfile dedicated to running my unit tests, but I am not sure how I am supposed to get the coverage directory it generates (inside the container).
I would like to retrieve it as an artifact so I can analyze it, but is that possible since it is generated inside the container?
Use the docker cp command.
Suppose you want to copy the /tmp/foo directory from a container to the existing /tmp directory on your host, and you run docker cp in your ~ (home) directory on the local host:
$ docker cp container_name:tmp/foo /tmp
Docker creates a /tmp/foo directory on your host.
If your container exits after executing, you can map a volume from your host into the container; this way you will still have the data on your host after the container dies. Declaring
VOLUME ["/home/data"]
in the Dockerfile marks /home/data in the container as a volume; to land the data at a known place on the host, bind-mount it at run time with -v /host/path:/home/data. Adjust the paths at will.
More info
https://docs.docker.com/engine/reference/builder/#notes-about-specifying-volumes
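A short sketch of that run-time bind mount, with hypothetical paths and image name:
docker run --rm -v "$(pwd)/coverage:/home/data" my-test-image
After the container exits, whatever the tests wrote to /home/data inside the container is available in ./coverage on the host.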

Run a script into a container then copy files to host from the container

I want to run a script against a container and copy the output files back to the host. I have a few questions:
Does the script need to be inside the container in order to run, or can I keep the script on the host and still run it against the container?
Copying files is done with the docker cp command, which is only available on the host; inside the container, 'docker cp' is not available. So if the script is running inside the container, how can it copy files to the host?
What I am trying to do is the following (my running container has mongodb):
Export certain collections to json files
Copy the resulted files to the host
As you can see, some commands, such as 'mongoexport', are available only inside the container, and some, like 'docker cp', are available only on the host.
Simply use a Docker volume; it is the best way to share data between containers and their host. See the sketch below.
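A sketch under assumed names (container mongo, database mydb, collection users, host directory /home/user/exports), driving everything from a script on the host:
docker run -d --name mongo -v /home/user/exports:/exports mongo
docker exec mongo mongoexport --db mydb --collection users --out /exports/users.json
The exported file ends up at /home/user/exports/users.json on the host, and the script itself never needs to live inside the container.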
