How can I use the Dredd Docker image interactively?

I would like to use the Docker image apiaryio/dredd instead of the npm package dredd. I am not familiar with running and debugging npm-based Docker images. How can I run the basic usage example from the npm package's "Quick Start" section
$ dredd init
$ dredd
if I have a Swagger file at $PWD/api/api.yaml or $PWD/api/api.json instead of the api-description.apib?

TL;DR
Run the Dredd image as a command-line tool (Dredd image at Docker Hub):
docker run -it -v $PWD:/api -w /api apiaryio/dredd init
[Optional] Turn it into a script:
#!/bin/bash
echo '***'
echo 'Root dir is /api'
export MYIP=`ifconfig | sed -En 's/127.0.0.1//;s/.*inet (addr:)?(([0-9]*\.){3}[0-9]*).*/\2/p'`
echo 'Host ip is: ' $MYIP
echo 'Configure the URL of the tested API endpoint: http://api-srv:<endpoint-port>. Set api-srv to point to your server.'
echo 'This script will set api-srv to docker host machine - ' $MYIP
echo '***'
docker run -it --add-host "api-srv:$MYIP" -v $PWD:/api -w /api apiaryio/dredd dredd $1
[Optional] Put this script in a folder that is on your PATH and create a short alias for it:
alias dredd='bash ./scripts/dredd.sh'
Code at Github gist.
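For the Swagger file from the question, the same container can be pointed at the description file and the endpoint directly, since dredd takes them as positional arguments. A sketch (the port 8080 is a placeholder for your endpoint port, and $MYIP comes from the script above):
# api/api.yaml and port 8080 are assumed examples for your setup
docker run -it --add-host "api-srv:$MYIP" -v $PWD:/api -w /api \
  apiaryio/dredd dredd api/api.yaml http://api-srv:8080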

Related

Mounted the AWS CLI credentials as volume to docker container however still credentials are not being referred

I have created a docker image using the AmazonLinux:2 base image in my Dockerfile. This container will run as a Jenkins build agent on a Linux server and has to make certain AWS API calls. In my Dockerfile, I'm copying a shell script called assume-role.sh.
Code snippet:-
COPY ./assume-role.sh .
RUN ["chmod", "+x", "assume-role.sh"]
ENTRYPOINT ["/assume-role.sh"]
CMD ["bash", "--"]
Shell script definition:-
#!/usr/bin/env bash
#echo Your container args are: "${1} ${2} ${3} ${4} ${5}"
echo Your container args are: "${1}"
ROLE_ARN="${1}"
AWS_DEFAULT_REGION="${2:-us-east-1}"
SESSIONID=$(date +"%s")
DURATIONSECONDS="${3:-3600}"
#Temporary loggings starts here
id
pwd
ls .aws
cat .aws/credentials
#Temporary loggings ends here
# AWS STS AssumeRole
RESULT=(`aws sts assume-role --role-arn $ROLE_ARN \
--role-session-name $SESSIONID \
--duration-seconds $DURATIONSECONDS \
--query '[Credentials.AccessKeyId,Credentials.SecretAccessKey,Credentials.SessionToken]' \
--output text`)
# Setting up temporary creds
export AWS_ACCESS_KEY_ID=${RESULT[0]}
export AWS_SECRET_ACCESS_KEY=${RESULT[1]}
export AWS_SECURITY_TOKEN=${RESULT[2]}
export AWS_SESSION_TOKEN=${AWS_SECURITY_TOKEN}
echo 'AWS STS AssumeRole completed successfully'
# Making test AWS API calls
aws s3 ls
echo 'test calls completed'
I'm running the docker container like this:-
docker run -d -v $PWD/.aws:/.aws:ro -e XDG_CACHE_HOME=/tmp/go/.cache test-image arn:aws:iam::829327394277:role/myjenkins
What I'm trying to do here is mount the .aws credentials from the host directory into the container at the root level. The volume mount is successful, and I can see the log output from these lines of the shell script:
ls .aws
cat .aws/credentials
It tells me there is a .aws folder with credentials inside it at the root level (/). However, the AWS CLI is somehow not picking the credentials up, so the remaining API calls such as AWS STS assume-role fail.
Can somebody please suggest what is wrong here?
[Output of docker run]
Your container args are: arn:aws:iam::829327394277:role/myjenkins
uid=0(root) gid=0(root) groups=0(root)
/
config
credentials
[default]
aws_access_key_id = AKXXXXXXXXXXXXXXXXXXXP
aws_secret_access_key = e8SYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYxYm
Unable to locate credentials. You can configure credentials by running "aws configure".
AWS STS AssumeRole completed successfully
Unable to locate credentials. You can configure credentials by running "aws configure".
test calls completed
I finally found the issue: the path was wrong when mounting the .aws volume to the container.
Instead of this -v $PWD/.aws:/.aws:ro, it was supposed to be -v $PWD/.aws:/root/.aws:ro
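Alternatively, if you don't want to depend on the container user's home directory, you can point the AWS CLI at the mounted file explicitly with the AWS_SHARED_CREDENTIALS_FILE environment variable. A sketch based on the command from the question, keeping the original /.aws mount path:
docker run -d \
  -v $PWD/.aws:/.aws:ro \
  -e AWS_SHARED_CREDENTIALS_FILE=/.aws/credentials \
  -e XDG_CACHE_HOME=/tmp/go/.cache \
  test-image arn:aws:iam::829327394277:role/myjenkins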

How to set up a Google Cloud cloudbuild.yaml to replicate a Jenkins job?

I have the following script that's run in my Jenkins job:
set +x
SERVICE_ACCOUNT=`cat "$GCLOUD_AUTH_FILE"`
docker login -u _json_key -p "${SERVICE_ACCOUNT}" https://gcr.io
set -x
docker pull gcr.io/$MYPROJECT/automation:master
docker run --rm --attach STDOUT -v "$(pwd)":/workspace -v "$GCLOUD_AUTH_FILE":/gcloud-auth/service_account_key.json -v /var/run/docker.sock:/var/run/docker.sock -e "BRANCH=master" -e "PROJECT=myproject" gcr.io/myproject/automation:master "/building/buildImages.sh" "myapp"
if [ $? -ne 0 ]; then
exit 1
fi
I am now trying to do this in cloudbuild.yaml so that I can run my script using my own automation image (which has a bunch of dependencies such as docker, JDK, and pip installed) and mount my git folders into my workspace directory.
I tried putting my cloudbuild.yaml at the top level of my git repo and set it up like this:
steps:
- name: 'gcr.io/myproject/automation:master'
volumes:
- name: 'current-working-dir'
path: /mydirectory
args: ['bash', '-c','/building/buildImages.sh', 'myapp']
timeout: 4000s
But this gives me an error saying:
invalid build: Volume "current-working-dir" is only used by one step
Just FYI, my script buildImages.sh copies folders and Dockerfiles, runs pip install, npm, and gradle commands, and then docker build commands (kind of an all-in-one solution).
What's the way to translate my script to cloudbuild.yaml?
Try this in your cloudbuild.yaml:
steps:
- name: 'gcr.io/<your-project>/<image>'
args: ['sh','<your-script>.sh']
Using this, I was able to pull the image containing my script from Google Container Registry and then run the script using sh. It didn't matter where the script is. I'm using Alpine as the base image in my Dockerfile.
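For reference, here is a sketch of how the original Jenkins invocation could map onto cloudbuild.yaml. The entrypoint, env, and timeout fields are standard Cloud Build options, and Cloud Build checks the repo out into /workspace automatically, so no explicit volume is needed for a single step. Note that with the bash -c form from the question, the trailing 'myapp' argument would have become $0 rather than $1:
steps:
- name: 'gcr.io/myproject/automation:master'
  # Run the script directly so 'myapp' is passed as $1
  entrypoint: 'bash'
  args: ['/building/buildImages.sh', 'myapp']
  env:
  - 'BRANCH=master'
  - 'PROJECT=myproject'
timeout: 4000s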

Is it possible to add an installer, run it and delete it during one build step in Docker?

I'm trying to create a Docker image from a pretty large installer binary (300+ MB). I want to add the installer to the image, install it, and delete the installer. This doesn't seem to be possible:
COPY huge-installer.bin /tmp
RUN /tmp/huge-installer.bin
RUN rm /tmp/huge-installer.bin # <- has no effect on the image size
Using multiple build stages doesn't seem to solve this, since I need to run the installer in the final image. If I could execute the installer directly from a previous build stage, without copying it, that would solve my problem, but as far as I know that's not possible.
Is there any way to avoid including the full weight of the installer in the final image?
I ended up solving this by using the built-in HTTP server in Python to make the project directory available to the image over HTTP.
Inside the Dockerfile, I can run commands like this, piping scripts directly to bash using curl:
RUN curl "http://127.0.0.1:${SERVER_PORT}/installer-${INSTALLER_VERSION}.bin" | bash
Or save binaries, run them and delete them in one step:
RUN curl -O "http://127.0.0.1:${SERVER_PORT}/binary-${INSTALLER_VERSION}.bin" && \
./binary-${INSTALLER_VERSION}.bin && \
rm binary-${INSTALLER_VERSION}.bin
I use a Makefile to start the server and stop it after the build, but you can use a build script instead.
Here's a Makefile example:
SHELL := bash
IMAGE_NAME := app-test
VERSION := 1.0.0
SERVER_PORT := 8580
.ONESHELL:
.PHONY: build
build:
	# Kills the HTTP server when the build is done
	function cleanup {
	  pkill -f "python3 -m http.server.*${SERVER_PORT}"
	}
	trap cleanup EXIT
	# Starts a HTTP server that makes the contents of the project directory
	# available to the image
	python3 -m http.server -b 127.0.0.1 ${SERVER_PORT} &>/dev/null &
	sleep 1
	EXTRA_ARGS=""
	# Allows skipping the build cache by setting NO_CACHE=1
	if [[ -n $$NO_CACHE ]]; then
	  EXTRA_ARGS="--no-cache"
	fi
	docker build $$EXTRA_ARGS \
	  --network host \
	  --build-arg SERVER_PORT=${SERVER_PORT} \
	  -t ${IMAGE_NAME}:latest \
	  .
	docker tag ${IMAGE_NAME}:latest ${IMAGE_NAME}:${VERSION}
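For completeness, the Dockerfile side of this setup could look roughly like this. This is only a sketch: the installer file name, the INSTALLER_VERSION default, and the use of Alpine and curl are assumptions, not part of the original setup.
FROM alpine:latest
ARG SERVER_PORT
ARG INSTALLER_VERSION=1.0.0
# Fetch the installer from the host's HTTP server (reachable thanks to
# --network host), run it and remove it within a single RUN instruction,
# so it never ends up in an image layer
RUN apk add --no-cache curl && \
    curl -fO "http://127.0.0.1:${SERVER_PORT}/binary-${INSTALLER_VERSION}.bin" && \
    chmod +x "./binary-${INSTALLER_VERSION}.bin" && \
    "./binary-${INSTALLER_VERSION}.bin" && \
    rm "binary-${INSTALLER_VERSION}.bin"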
I think the best way is to download the binary from a web server and then run it:
RUN wget -O /tmp/huge-installer.bin http://myweb/huge-installer.bin && chmod +x /tmp/huge-installer.bin && /tmp/huge-installer.bin && rm /tmp/huge-installer.bin
This way, the image layer will not contain the binary you downloaded.
I didn't test it thoroughly, but wouldn't such an approach be viable? (Besides LinPy's answer, which is way easier if you have the option of just doing it that way.)
Dockerfile:
FROM alpine:latest
COPY entrypoint.sh /tmp/entrypoint.sh
RUN \
echo "I am an image that can run your huge installer binary!" \
&& echo "I will only function when you give it to me as a volume mount."
ENTRYPOINT [ "/tmp/entrypoint.sh" ]
entrypoint.sh:
#!/bin/sh
/tmp/your-installer # install your stuff here
while true; do
echo "installer finished, commit me now!"
sleep 5
done
Then run:
$ docker build -t foo-1 .
$ docker run --rm --name foo-1 -d -v $(pwd)/your-installer:/tmp/your-installer foo-1
$ docker logs -f foo-1
# once it echoes "commit me now!", run the next command
$ docker commit foo-1 foo-2
$ docker stop foo-1
Since the installer was only mounted as a volume, the image foo-2 should not contain it anymore. You could also go and build another Dockerfile based on foo-2 to change the entrypoint, for example.
Cf. docker commit

How to workaround "the input device is not a TTY" when using grunt-shell to invoke a script that calls docker run?

When issuing grunt shell:test, I'm getting the warning "the input device is not a TTY" and don't want to have to use -f (--force):
$ grunt shell:test
Running "shell:test" (shell) task
the input device is not a TTY
Warning: Command failed: /bin/sh -c ./run.sh npm test
the input device is not a TTY
Use --force to continue.
Aborted due to warnings.
Here's the Gruntfile.js command:
shell: {
  test: {
    command: './run.sh npm test'
  }
}
Here's run.sh:
#!/bin/sh
# should use the latest available image to validate, but not LATEST
if [ -f .env ]; then
RUN_ENV_FILE='--env-file .env'
fi
docker run $RUN_ENV_FILE -it --rm --user node -v "$PWD":/app -w /app yaktor/node:0.39.0 $@
Here's the relevant package.json scripts section with the test command:
"scripts": {
"test": "mocha --color=true -R spec test/*.test.js && npm run lint"
}
How can I get grunt to make docker happy with a TTY? Executing ./run.sh npm test outside of grunt works fine:
$ ./run.sh npm test
> yaktor@0.59.2-pre.0 test /app
> mocha --color=true -R spec test/*.test.js && npm run lint
[snip]
105 passing (3s)
> yaktor@0.59.2-pre.0 lint /app
> standard --verbose
Remove the -t from the docker run command:
docker run $RUN_ENV_FILE -i --rm --user node -v "$PWD":/app -w /app yaktor/node:0.39.0 $@
The -t tells docker to configure a tty, which won't work if you don't have a tty yourself and try to attach to the container (the default when you don't pass -d).
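If you want run.sh to keep working both interactively and from grunt or cron, you can also make the flag conditional on whether stdin is a terminal. A sketch (not part of the original answer) of what the last lines of run.sh could look like:
# Only request a TTY when stdin actually is one
if [ -t 0 ]; then
  TTY_FLAGS="-it"
else
  TTY_FLAGS="-i"
fi
docker run $RUN_ENV_FILE $TTY_FLAGS --rm --user node -v "$PWD":/app -w /app yaktor/node:0.39.0 "$@"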
This solved an annoying issue for me. The script had these lines:
docker exec -it $( docker ps | grep mysql | cut -d' ' -f1) mysql --user= ..... > /var/tmp/temp.file
mutt -s "File is here" someone@somewhere.com < /var/tmp/temp.file
The script would run great if run directly, and the mail would arrive with the correct output. However, when run from cron (crontab -e), the mail would arrive with no content. I tried many things around permissions, shells, paths, etc., but no joy!
Finally found this:
*/20 * * * * scriptblah.sh > $HOME/cron.log 2>&1
And on that cron.log file found this output:
the input device is not a TTY
A search led me here, and after I removed the -t, it's working great now!
docker exec -i $( docker ps | grep mysql | cut -d' ' -f1) mysql --user= ..... > /var/tmp/temp.file

How to get /etc/profile to run automatically in Alpine / Docker

How can I get /etc/profile to run automatically when starting an Alpine Docker container interactively? I have added some aliases to an aliases.sh file and placed it in /etc/profile.d, but when I start the container using docker run -it [my_container] sh, my aliases aren't active. I have to manually type . /etc/profile from the command line each time.
Is there some other configuration necessary to get /etc/profile to run at login? I've also had problems with using a ~/.profile file. Any insight is appreciated!
EDIT:
Based on VonC's answer, I pulled and ran his example ruby container. Here is what I got:
$ docker run --rm --name ruby -it codeclimate/alpine-ruby:b42
/ # more /etc/profile.d/rubygems.sh
export PATH=$PATH:/usr/lib/ruby/gems/2.0.0/bin
/ # env
no_proxy=*.local, 169.254/16
HOSTNAME=6c7e93ebc5a1
SHLVL=1
HOME=/root
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
/ # exit
Although the /etc/profile.d/rubygems.sh file exists, it is not being run when I log in, and my PATH environment variable is not being updated. Am I using the wrong docker run command? Is something else missing? Has anyone gotten ~/.profile or /etc/profile.d/ files to work with Alpine on Docker? Thanks!
The default shell in Alpine Linux is ash.
Ash will only read the /etc/profile and ~/.profile files if it is started as a login shell sh -l.
To force Ash to source /etc/profile or any other script you want upon its invocation as a non-login shell, you need to set up an environment variable called ENV before launching Ash,
e.g. in your Dockerfile
FROM alpine:3.5
ENV ENV="/root/.ashrc"
RUN echo "echo 'Hello, world!'" > "$ENV"
When you build that you get:
deployer#ubuntu-1604-amd64:~/blah$ docker build --tag test .
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM alpine:3.5
3.5: Pulling from library/alpine
627beaf3eaaf: Pull complete
Digest: sha256:58e1a1bb75db1b5a24a462dd5e2915277ea06438c3f105138f97eb53149673c4
Status: Downloaded newer image for alpine:3.5
---> 4a415e366388
Step 2/3 : ENV ENV "/root/.ashrc"
---> Running in a9b6ff7303c2
---> 8d4af0b7839d
Removing intermediate container a9b6ff7303c2
Step 3/3 : RUN echo "echo 'Hello, world!'" > "$ENV"
---> Running in 57c2fd3353f3
---> 2cee6e034546
Removing intermediate container 57c2fd3353f3
Successfully built 2cee6e034546
Finally, when you run the newly generated container, you get:
deployer#ubuntu-1604-amd64:~/blah$ docker run -ti test /bin/sh
Hello, world!
/ # exit
Notice the Ash shell didn't run as a login shell.
So to answer your query, replace
ENV ENV="/root/.ashrc"
with:
ENV ENV="/etc/profile"
and Alpine Linux's Ash shell will automatically source the /etc/profile script each time the shell is launched.
Gotcha: /etc/profile is normally meant to be sourced only once! So I would advise that you don't source it there and source a /root/.somercfile instead.
Source: https://stackoverflow.com/a/40538356
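Following that advice, and using the aliases.sh from the question, a variant that points ENV at your own rc file rather than at /etc/profile could look like this (a sketch):
FROM alpine:3.5
# Ash sources $ENV on every non-login invocation
ENV ENV="/root/.ashrc"
# Reuse the aliases from the question instead of sourcing /etc/profile
COPY aliases.sh /root/.ashrc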
You still can try in your Dockerfile a:
RUN echo '\
. /etc/profile ; \
' >> /root/.profile
(assuming the current user is root. If not, replace /root with the full home path)
That being said, those /etc/profile.d/xx.sh scripts should run.
See codeclimate/docker-alpine-ruby as an example:
COPY files /
With 'files/etc" including an files/etc/profile.d/rubygems.sh running just fine.
In the OP's project Dockerfile, there is a
COPY aliases.sh /etc/profile.d/
But the default shell is not a login shell (sh -l), which means profile files (or those in /etc/profile.d) are not sourced.
Adding sh -l would work:
docker#default:~$ docker run --rm --name ruby -it codeclimate/alpine-ruby:b42 sh -l
87a58e26b744:/# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/ruby/gems/2.0.0/bin
As mentioned by Jinesh before, the default shell in Alpine Linux is ash:
localhost:~$ echo $SHELL
/bin/ash
localhost:~$
Therefore the simple solution is to add your aliases in .profile. In this case, I put all my aliases in ~/.ash_aliases:
localhost:~$ cat .profile
# ~/.profile
# Alias
if [ -f ~/.ash_aliases ]; then
. ~/.ash_aliases
fi
localhost:~$
The .ash_aliases file:
localhost:~$ cat .ash_aliases
alias a=alias
alias c=clear
alias f=file
alias g=grep
alias l='ls -lh'
localhost:~$
And it works :)
I use this:
docker exec -it my_container /bin/ash '-l'
The -l flag passed to ash will make it behave as a login shell, thus reading ~/.profile
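The same trick works when starting the container, e.g. with the image from the question:
docker run -it [my_container] /bin/ash -l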
