gitlab-runner exec docker - inject gpg key - docker

I need to run gitlab-runner locally using the exec command and the docker executor.
The docker executor clones the project into the container, so I start with a blank slate. In order to run the tests, I need to decrypt certain credentials files. Normally this is done on the dev machine using the developer's private GPG key. But now we are in a container and I can't find a way to inject the developer's GPG key into the testing container.
Normally it would make sense to pass the private key as an environment variable but the environment feature is not supported on the gitlab-runner exec command.
It would be much easier if gitlab-runner would just copy the project files into the container instead of doing a fresh clone of the project. That way the developer could decrypt the credentials on the host and everything would be fine.
What are my options here?

The only way to pass environment variables into the testing container is to use the --env parameter of gitlab-runner.
First we need to store the private key in an environment variable on our local machine. I used direnv for this but it also works manually:
export GPG_PRIVATE_KEY="$(gpg --export-secret-keys -a <KEY ID>)"
Then we can run gitlab-runner like this:
gitlab-runner exec docker test \
--env GPG_PRIVATE_KEY="$GPG_PRIVATE_KEY" \
--env GPG_PASSPHRASE="$GPG_PASSPHRASE"
Note that I also passed the passphrase in an environment variable because I need it inside the container to decrypt my data.
Now I can import the key in the docker container. The top of my .gitlab-ci.yml looks like this:
image: quay.io/mhart/alpine-node:8
before_script:
- apk add --no-cache gnupg
- echo "$GPG_PRIVATE_KEY" | gpg --batch --import --pinentry-mode loopback --no-tty
Done, now we can use that key inside the container to do what we want.
I also ran into some problems when I tried to decrypt my data. This guide was incredibly helpful and solved my issue.
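For reference, here is a minimal sketch of a non-interactive decryption step inside the job; the credentials file names are made-up placeholders, not from my project:
gpg --batch --yes --pinentry-mode loopback --passphrase "$GPG_PASSPHRASE" --output credentials.yml --decrypt credentials.yml.gpg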

It is hard for me to imagine why you need to invoke gitlab-runner with exec, but why couldn't you do:
exec gitlab-runner sh
export GPG_KEY=...
....

Related

docker container exits out immediately with a script attached

I'm trying to add a script to a docker run command. The command I'm using is:
docker run -dit --name 1.4 ubuntu sh -c 'echo "Input website:"; read website; echo "Searching.."; sleep 1; curl http://$website;'
and then install curl, then enter a website as input, and it should reply to me, as per the course I'm studying. But running this exact command makes the container exit immediately.
Any guidance on why that would be?
Also, how should I send the input to the container so it can use it afterwards? Do I just attach to it after installing curl in the terminal?
I'm going to recommend an extremely different workflow from what you suggest. Rather than manually installing software and trying to type arguments into the stdin of a shell script, you can build this into a reusable Docker image and provide its options as environment variables.
In comments you describe a workflow where you first start a container, then get a debugging shell inside of it, and then install curl. Unless you're really truly debugging, this is a pretty unusual workflow: anything you install this way will get lost as soon as the container exits, and you'll have to repeat this step every time you re-run the container. Instead, create a new empty directory, and inside that create a file named Dockerfile (exactly that name, no extension, capital D) containing
# Start our new image from this base
FROM ubuntu
# Install any OS-level packages we need.
# DEBIAN_FRONTEND=noninteractive avoids post-installation questions;
# --no-install-recommends skips installing unneeded extra packages;
# --assume-yes (-y) skips an "are you sure" prompt.
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install \
      --no-install-recommends \
      --assume-yes \
      curl
Rather than try to read from the container's input, you can take the URL as an environment variable. In most cases the best way to give the main command to a container is by specifying it in the Dockerfile. You can imagine running a larger script or program here as well, and it would take the same environment-variable setting (using Python's os.environ, Node's process.env, Ruby's ENV, etc.).
In our case, let's make the main container command be the single curl command that you're trying to run. We haven't specified the value of the environment variable yet, and that's okay: this shell command isn't evaluated until the container actually runs.
# at the end of the Dockerfile
CMD curl "$website"
Now let's build and run it. When we do launch the container, we need to provide that $website environment variable value, which we can do with a docker run -e option.
# Build the image, giving it a name, using the content in the current directory:
docker build -t my/curl .
# Run it, deleting the container when done, with the same image name as above:
docker run \
--rm \
-e website=https://stackoverflow.com \
my/curl
So note that we're starting the container in the foreground (no -d option) since we want to see its output and we expect it to exit promptly; we're cleaning up the container when it's done; we're not trying to pass a full shell script as a command-line argument; and we are providing our options on the command line, so we don't need to make the container's stdin work (no -i or -t option).
A Docker container is a wrapper around a single process. When that process exits, the container exits too. In this example, the thing you want the container to do is run a curl command; that's not a long-running process, hence docker run --rm but not -d. There's not an "afterwards" here, if you need to query a different Web site then launch a new container. It's very normal to destroy and recreate containers, especially since there are many options that can only be specified when you first start a container.
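For example, querying a different site is just another short-lived run of the same my/curl image (the URL here is only illustrative):
docker run --rm -e website=https://example.com my/curl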
With the image and container we've built here, in fact, it's useful to think about them as analogous to the /usr/bin/curl binary on your host. You build it once into a reusable artifact (here the Docker image), and you run multiple instances of it (curl commands or new Docker containers) giving options on the command line at startup time. You do not typically "get a shell" inside a curl command-line invocation, and I'd similarly avoid docker exec outside of debugging tasks.
You can also use the alpine/curl image to get the curl command without needing to install anything.
First, start the container in detached mode with the -d flag.
Then run your script with the exec subcommand.
docker run -d --name 1.4 alpine/curl sleep 600
docker exec -it 1.4 sh -c 'echo "Input website:"; read website; echo "Searching.."; sleep 1; curl http://$website;'

Jenkins SSH Username with Private key sent to script which calls docker container

Currently we have many Jenkins jobs which use the username and password credential variables, which are then passed into a Docker container as environment variables by a shell script. The Docker container then pulls source code from Bitbucket using these credentials and performs the builds. This is working great, but now we have to switch over to using SSH Username with private key.
I've set these credentials up in Jenkins and it's pulling down the source code and attempting to trigger the Docker build, but I'm getting stuck here as I can't seem to send those credentials over to the Docker container.
Is anyone able to provide some guidance?
Within the Jenkins job is a shell script which basically runs the below:
docker run --rm \
-e JOB="build-release" \
-e LOG_FOLDER=$logDirectory \
-v $logDirectory:$logDirectory \
-e TEMP_FOLDER=$tempFolder \
-v $tempFolder:$tempFolder \
-e BRANCH_NAME=$branch \
-v /var/run/docker.sock:/var/run/docker.sock \
dtr.com/test/checkout-build
Previously, Bitbucket username and password variables were passed in above; these have now been removed.
The error being thrown within the docker container is:
'Please make sure you have the correct access rights
and the repository exists.
Host key verification failed.'
I had assumed that by mapping /var/run/docker.sock, the Docker daemon would be able to communicate with the machine hosting Docker and use its SSH keys to access Bitbucket, but I guess not :(
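One hedged sketch of how the key could be handed to the container: assuming the Jenkins credential is exposed as a file path in a variable like $SSH_KEY_FILE (for example via a credentials binding) and the container's git clone runs over SSH as root, you could bind-mount the key along with a known_hosts entry for Bitbucket, which also addresses the "Host key verification failed" error:
docker run --rm \
-e JOB="build-release" \
-v "$SSH_KEY_FILE":/root/.ssh/id_rsa:ro \
-v "$HOME/.ssh/known_hosts":/root/.ssh/known_hosts:ro \
-v /var/run/docker.sock:/var/run/docker.sock \
dtr.com/test/checkout-build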

How to set a docker env file that is inside the image

I am a total Docker newb, so sorry for that.
I have a stand-alone Docker image (some node app)
that I want to run in different environments.
I want to set up the env file with docker run --env-file <path>.
However, I want to use the env files that are inside the image (so I can use a different file per env),
and not ones on the server,
so the path would be inside the image.
Is there any way to do so?
Perhaps something like "cp" (docker cp [OPTIONS] CONTAINER:<path>),
but that doesn't seem to work.
What is the best practice here?
Am I making sense?
Thanks!!
Docker bind mounts are a fairly effective way to inject configuration files like this into a running container. I would not try to describe every possible configuration in your built image; instead, let that be configuration that's pushed in from the host.
Pick some single specific file to hold the configuration. For the sake of argument, let's say it's /usr/src/app/env. Set up your application however it's built to read that file at startup time. Either make sure the application can still start up if the file is missing, or build your image with some file there with reasonable default settings.
Now when you run your container, it will always read settings from that known file, but you can specify a host file that will be mounted there:
docker run -v $PWD/env.development:/usr/src/app/env myimage
Now you can locally have an env.development that specifies extended logging and a local database, and an env.production with minimal logging and pointing at your production database. If you set up a third environment (say a shared test database with some known data in it) you can just run the container with this new configuration, without rebuilding it.
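For instance, the two host-side files might look something like this (the variable names and values are only illustrative, not from any real app):
# env.development
LOG_LEVEL=debug
DATABASE_URL=postgres://localhost:5432/myapp_dev
# env.production
LOG_LEVEL=warn
DATABASE_URL=postgres://db.internal.example.com:5432/myapp
The production container would then be run the same way, just pointing at the other file: docker run -v $PWD/env.production:/usr/src/app/env myimage.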
Following is the command to run Docker:
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Example
docker run --name test -it debian
Focus on the following switches:
--env, -e      Set environment variables
--env-file     Read in a file of environment variables
An example from a startup script:
sudo docker run -d -t -i -e REDIS_NAMESPACE='staging' \
-e POSTGRES_ENV_POSTGRES_PASSWORD='foo' \
-e POSTGRES_ENV_POSTGRES_USER='bar' \
-e POSTGRES_ENV_DB_NAME='mysite_staging' \
-e POSTGRES_PORT_5432_TCP_ADDR='docker-db-1.hidden.us-east-1.rds.amazonaws.com' \
-e SITE_URL='staging.mysite.com' \
-p 80:80 \
--link redis:redis \
--name container_name dockerhub_id/image_name
In case you have many environment variables, and especially if they're meant to be secret, you can use an env-file:
$ docker run --env-file ./env.list ubuntu bash
The --env-file flag takes a filename as an argument and expects each
line to be in the VAR=VAL format, mimicking the argument passed to
--env. Comment lines need only be prefixed with #.
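For example, a small env.list in that format might look like this (the names and values are just illustrative):
# staging settings
REDIS_NAMESPACE=staging
SITE_URL=staging.mysite.com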

set environment variable in running docker container

I need to set an environment variable in a running Docker container. I am already aware of the way of setting environment variables while creating a container. As far as I found, there is no straightforward way to do this with Docker, and Docker is planning to add something in the new version 1.13.
But I found that some people were able to manage it, which is not working for me now. I tried the following ways but they did not work for me:
docker exec -it -u=root test /bin/bash -c "export port=8090"
echo "export port=8090" to /etc/bash.bashrc using a script and then source it
docker exec -it test /bin/bash -c "source /etc/bash.bashrc"
Configuring the whole thing in a script and running it from the host also did not work. While running the script from the host, all the other commands execute successfully except "export port=8090", "source /etc/bash.bashrc", and "source /root/.bashrc".
Can anyone explain why sourcing a file from the host does not work in a Docker container, even when I set the user ("-u=root")? Can anyone help me solve this? When I source the file from inside the container it works perfectly, but in my case I have to do it from the host machine.
NOTE: I am using Docker 1.12 and tried the above in ubuntu:16.04 and ubuntu:14.04.
If you have a running process in the container and you are attempting to change an environment variable in the container so that the running process will pick it up dynamically, this will not work. The environment variables of a process are set when it starts. You can see here ways to overcome that, but I don't think that is the right way to go.
I would instead have a configuration file that the process reads (or listens to) periodically, and when you want to change the configuration, change the file.
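As a minimal sketch of that idea (the /config/app.env path and the 30-second interval are arbitrary, illustrative choices):
#!/bin/sh
# Re-read a mounted configuration file periodically instead of relying
# only on the environment the process was started with.
while true; do
  [ -f /config/app.env ] && . /config/app.env
  echo "currently configured port: $port"
  sleep 30
done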
If this isn't your scenario, please describe your scenario so we can better assist you.
I found a way to provide an environment variable to a running container. First, upgrade your docker-engine; I am using v1.12.5.
Create a script with the environment variables:
#!/bin/bash
echo "export VAR1=VAL1
export VAR2=VAL2" >> /etc/bash.bashrc
source /etc/bash.bashrc
Now start a container. Here, 'test' is the container name:
docker run -idt --name=test ubuntu
Copy your script to container:
docker cp script.sh test:/
Run the script:
docker exec -it test /bin/bash -c "/script.sh"
Restart your container:
docker restart test
Go to the container shell:
docker exec -it test /bin/bash
Check the variable:
echo $VAR1

Workaround to docker run "--env-file" supplied file not being evaluated as expected

My current setup for running a docker container is on the lines of this:
I've got a main.env file:
# Main
export PRIVATE_IP=`echo localhost`
export MONGODB_HOST="$PRIVATE_IP"
export MONGODB_URL="mongodb://$MONGODB_HOST:27017/development"
In my service file (upstart), I source this file: . /path/to/main.env
I then call docker run with multiple -e for each of the environment variables I want inside of the container. In this case I would call something like: docker run -e MONGODB_URL=$MONGODB_URL ubuntu bash
I would then expect MONGODB_URL inside of the container to equal mongodb://localhost:27017/development. Notice that in reality echo localhost is replaced by a curl to amazon's api for an actual PRIVATE_IP.
This becomes a bit unwieldy when you start having more and more environment variables you need to give your container. There is a fine point here: the environment variables need to be resolved at run time, such as with a call to curl or by referring to other env variables.
The solution I was hoping to use is:
calling docker run with an --env-file parameter such as this:
# Main
PRIVATE_IP=`echo localhost`
MONGODB_HOST="$PRIVATE_IP"
MONGODB_URL="mongodb://$MONGODB_HOST:27017/development"
Then my docker run command would be significantly shortened to docker run --env-file=/path/to/main.env ubuntu bash (keep in mind I usually have around 12-15 environment variables).
This is where I hit my problem which is that inside the container none of the variables resolve as expected. Instead I end up with:
PRIVATE_IP=`echo localhost`
MONGODB_HOST="$PRIVATE_IP"
MONGODB_URL="mongodb://$MONGODB_HOST:27017/development"
I could circumvent this by doing the following:
Sourcing the main.env file.
Creating a file containing just the names of the variables I want (meaning docker would search for them in the environment).
Then calling docker run with this file as an argument to --env-file. This would work, but it would mean I would need to maintain two files instead of one, and really wouldn't be that big of an improvement over the current situation.
What I would prefer is to have the variables resolve as expected.
The closest question to mine that I could find is:
12factor config approach with Docker
Create a .env file, for example:
test=123
val=Guru
Execute the command:
docker run -it --env-file=.env bash
Inside bash, verify using:
echo $test (should print 123)
Both --env and --env-file set up variables as-is and do not expand nested variables.
Solomon Hykes talks about configuring containers at run time and the various approaches. The one that should work for you is to volume-mount the main.env from the host into the container and source it.
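A minimal sketch of that approach, reusing the main.env path and the ubuntu bash example from the question (-it and the bash -c wrapper are just for demonstration):
docker run -it \
-v /path/to/main.env:/main.env:ro \
ubuntu bash -c '. /main.env && exec bash'
Because the file is sourced inside the container, the backticks and nested variable references resolve there at run time.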
So I just faced this issue as well; what solved it for me was specifying the --env-file or -e KEY=VAL options before the name of the container image. For example:
Broken:
docker run my-image --env-file .env
Fixed:
docker run --env-file .env my-image
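(Everything after the image name is treated as the command to run inside the container, so in the broken form --env-file .env was being handed to the image's command rather than to docker run.)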
An env file that is nothing more than key/value pairs can be processed with normal shell commands and appended to the environment. Look at bash's -a (allexport) option, as sketched below.
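A minimal sketch of that idea, using the plain VAR=VAL version of main.env from the question (set -a marks every variable defined while it is active for export):
set -a                # allexport: auto-export variables defined from here on
. /path/to/main.env   # the plain VAR=VAL version of the file
set +a                # stop auto-exporting
docker run -e MONGODB_URL="$MONGODB_URL" ubuntu bash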
What you can do is create a startup script that runs when the container starts. So if your current Dockerfile looks something like this:
FROM ...
...
CMD command
Change it to:
FROM ...
...
ADD start.sh /start.sh
RUN chmod +x /start.sh
CMD ["/start.sh"]
In your start.sh script do the following:
#!/bin/sh
export PRIVATE_IP=`echo localhost`
export MONGODB_HOST="$PRIVATE_IP"
export MONGODB_URL="mongodb://$MONGODB_HOST:27017/development"
command
I had a very similar problem to this. If I passed the contents of the env file to docker as separate -e directives, then everything ran fine; however, if I passed the file using --env-file, the container failed to run properly.
It turns out there were some spurious line endings in the file (I had copied it from Windows and ran Docker in Ubuntu). When I removed them, the container ran the same with --env or --env-file.
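If you hit the same thing, something like this strips the Windows-style carriage returns in place (GNU sed syntax):
sed -i 's/\r$//' main.env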
I had this issue when using docker run in a separate run script run.sh file, since I wanted the credentials ADMIN_USER and ADMIN_PASSWORD to be accessible in the container, but not show up in the command.
Following the other answers and passing a separate environment file with --env or --env-file didn't work for my image (though it worked for the Bash image). What worked was creating a separate env file...
# env.list
ADMIN_USER='username'
ADMIN_PASSWORD='password'
...and sourcing it in the run script when launching the container:
# run.sh
source env.list
docker run -d \
-e ADMIN_USER="$ADMIN_USER" \
-e ADMIN_PASSWORD="$ADMIN_PASSWORD" \
image_repo/name:tag
