I'm sending the Gradle test reports to $CIRCLE_ARTIFACTS (I also tried $CIRCLE_TEST_REPORTS), but all the HTML pages such as index.html are served as text/plain, so the reports don't display nicely.
Is there a way to tell CircleCI to display the content as html or is there another location this will work from?
My test section:
test:
  override:
    - ./gradlew check jacocoTestReport
  post:
    - cp -R build/reports/ $CIRCLE_TEST_REPORTS
    - mkdir -p $CIRCLE_TEST_REPORTS/junit/
    - find . -type f -regex ".*/build/test-results/.*xml" -exec cp {} $CIRCLE_TEST_REPORTS/junit/ \;
Related
I am downloading and unzipping binaryen in a run step.
- run: wget -c https://github.com/WebAssembly/binaryen/releases/download/version_101/binaryen-version_101-x86_64-linux.tar.gz -O - | tar -xz -C /tmp/
I am then updating the path in $BASH_ENV.
- run: echo "export PATH=/tmp/binaryen-version_101/bin/wasm-opt:\${PATH}" >> $BASH_ENV
However, I still get a command not found for wasm-opt.
How can I install the downloaded wasm-opt binary such that another run step can use it?
The main issue is that the PATH variable should contain a list of directories. You added the actual binary itself to the path instead of the directory it resides in.
So for example, instead of /tmp/binaryen-version_101/bin/wasm-opt you want /tmp/binaryen-version_101/bin/. Also, after you add a directory to the PATH you won't be able to run those binaries until the next step.
Here's an example config I made:
version: 2.1
workflows:
  main:
    jobs:
      - build
jobs:
  build:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run: curl -sSL "https://github.com/WebAssembly/binaryen/releases/download/version_101/binaryen-version_101-x86_64-linux.tar.gz" | tar -xz -C /tmp/
      - run: echo 'export PATH=/tmp/binaryen-version_101/bin/:${PATH}' >> $BASH_ENV
      - run: wasm-opt
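Note the quoting: the single quotes (or the escaped \${PATH} from the question) keep ${PATH} from being expanded when the line is written, so it is resolved each time CircleCI sources $BASH_ENV at the start of a later step. That sourcing behavior is also why the new entry only takes effect in the step after the one that writes it. Both of these forms work:
      - run: echo 'export PATH=/tmp/binaryen-version_101/bin/:${PATH}' >> $BASH_ENV
      - run: echo "export PATH=/tmp/binaryen-version_101/bin/:\${PATH}" >> $BASH_ENV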
Hadolint is an awesome tool for linting Dockerfiles. I am trying to integrate it into my CI, but I am struggling to run it over multiple Dockerfiles. Does someone know what the syntax should look like? Here is how my directories are laid out:
dir1/Dockerfile
dir2/Dockerfile
dir3/foo/Dockerfile
In my .gitlab-ci.yml:
stage: hadolint
image: hadolint/hadolint:latest-debian
script:
  - mkdir -p reports
  - |
    hadolint dir1/Dockerfile > reports/dir1.json \
    hadolint dir2/Dockerfile > reports/dir2.json \
    hadolint dir3/foo/Dockerfile > reports/dir3.json
But the sample above is not working.
As far as I found, hadolint can run over several Dockerfiles at once. So in my case:
- hadolint */Dockerfile > reports/all_reports.json
But the problem with this approach is that all the reports end up in one file, which hampers maintenance and clarity.
If you want to keep all reports separated (one per top-level directory), you may want to rely on a small shell snippet. I mean something like:
- |
  find . -name Dockerfile -exec \
    sh -c 'src=${1#./} && { set -x && hadolint "$1"; } | tee -a "reports/${src%%/*}.txt"' sh "{}" \;
Explanation:
- find . -name Dockerfile loops over all Dockerfiles in the current directory;
- -exec sh -c '…' runs a subshell for each Dockerfile, setting:
  - $0 = "sh" (dummy value)
  - $1 = "{}" (the full, relative path of the Dockerfile), "{}" and \; being directly related to the find … -exec pattern;
- src=${1#./} trims the leading ./ from the path, replacing ./dir1/Dockerfile with dir1/Dockerfile;
- ${src%%/*} extracts the top-level directory name (dir1/Dockerfile → dir1);
- | tee -a … appends hadolint's output to the report file of the corresponding top-level directory (a plain > … should be avoided here, as it would overwrite that file whenever a top-level directory contains several Dockerfiles).
I have replaced the .json extension with .txt as hadolint does not seem to output JSON data.
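With the directory layout from the question, this should leave one report per top-level directory (dir3/foo/Dockerfile lands in the dir3 report, since ${src%%/*} keeps only the first path component):
reports/dir1.txt
reports/dir2.txt
reports/dir3.txt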
I am using security scan software in my Dockerfile and I need to add its bin folder to the path. Its path will contain the version part so I do not know the path until I download the software. My current progress is something like this:
1. Download the software:
RUN curl https://cloud.appscan.com/api/SCX/StaticAnalyzer/SAClientUtil?os=linux --output SAClientUtil.zip
RUN unzip SAClientUtil.zip -d SAClientUtil
2. The desired folder is located at SAClientUtil/SAClientUtil.X.Y.Z/bin/ (X.Y.Z may vary from run to run). Get there using a find and cd combination and try to add it to the PATH:
RUN cd "$(dirname "$(find SAClientUtil -type f -name appscan.sh | head -1)")"; \
export PATH="$PATH:$PWD"; # doesn't work
It looks like the ENV command does not evaluate the parameter, so
ENV PATH $PATH:"echo $(dirname "$(find SAClientUtil -type f -name appscan.sh | head -1)")"
doesn't work also.
Any ideas on how to dynamically add a folder to the PATH during docker image build?
If you're pretty sure the zip file will contain only a single directory with that exact layout, you can rename it to something fixed.
RUN curl https://cloud.appscan.com/api/SCX/StaticAnalyzer/SAClientUtil?os=linux --output SAClientUtil.zip \
 && unzip SAClientUtil.zip -d tmp \
 && mv tmp/SAClientUtil.* SAClientUtil \
 && rm -rf tmp SAClientUtil.zip
ENV PATH=/SAClientUtil/bin:${PATH}
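This works because Docker evaluates ENV itself while building the image (${PATH} here refers to the PATH already present in the base image), whereas an export in a RUN step only lasts for that single layer. If you'd rather keep the versioned directory name visible, a variant of the same idea is to symlink it to a fixed location; a minimal sketch, assuming the zip still contains a single SAClientUtil.X.Y.Z directory (the /opt paths are made up):
RUN curl https://cloud.appscan.com/api/SCX/StaticAnalyzer/SAClientUtil?os=linux --output SAClientUtil.zip \
 && unzip SAClientUtil.zip -d /opt \
 && ln -s /opt/SAClientUtil.* /opt/saclient \
 && rm SAClientUtil.zip
# /opt/saclient now resolves to the versioned directory, whatever X.Y.Z is
ENV PATH=/opt/saclient/bin:${PATH}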
A simple solution would be to include a small wrapper script in your image, and then use that to run commands from the SAClientUtil directory. For example, if I have the following in saclientwrapper.sh:
#!/bin/sh
# usage: saclientwrapper.sh <command> [args...]
cmd=$1
shift
# locate the versioned install directory
saclientpath=$(ls -d /SAClientUtil/SAClientUtil.*)
echo "got path: $saclientpath"
cd "$saclientpath"
exec "$saclientpath/bin/$cmd" "$@"
Then I can do this:
RUN curl https://cloud.appscan.com/api/SCX/StaticAnalyzer/SAClientUtil?os=linux --output SAClientUtil.zip
RUN unzip SAClientUtil.zip -d SAClientUtil
COPY saclientwrapper.sh /saclientwrapper.sh
RUN sh /saclientwrapper.sh appscan.sh
And this will produce, when building the image:
STEP 6: RUN sh /saclientwrapper.sh appscan.sh
got path: /SAClientUtil/SAClientUtil.8.0.1374
COMMAND SYNTAX
appscan <command> [options]
ADDITIONAL COMMAND HELP
appscan help <command>
.
.
.
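If you want the wrapper to be the default way of running commands in containers built from this image, one further option (a sketch, not part of the original answer) is to make it the image's entrypoint:
ENTRYPOINT ["sh", "/saclientwrapper.sh"]
so that, for example, docker run <image> appscan.sh help goes through the wrapper automatically.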
I am trying to use Bitbucket Pipelines for my Android project.
Here is my bitbucket-pipelines.yml:
image: javiersantos/android-ci:latest
pipelines:
  default:
    - step:
        script:
          - export GRADLE_USER_HOME=`pwd`/.gradle
          - chmod +x ./gradlew
          - ./gradlew assembleDebug
When I run my pipeline, I get this error:
+ chmod +x ./gradlew
chmod: cannot access './gradlew': No such file or directory
What did I miss in my pipeline configuration?
You need to cd into the folder that contains your gradlew file, if it is not in the root folder of your Git repository. You can check where your gradlew file is on the Docker build machine by inspecting the $BITBUCKET_CLONE_DIR environment variable, or by listing the files:
pipelines:
  default:
    - step:
        script:
          - echo "The current folder is: $PWD"
          - echo "The git repo root folder is: $BITBUCKET_CLONE_DIR"
          - echo "The files in the git root directory are: $(ls -la)"
          - echo "The folders in the git root directory are: $(echo */)"
          - echo "The gradlew file is at location: $( find . -name "gradlew" -type f -print0 | xargs )"
          - echo "Setting current working directory to subfolder"
          - cd MyProject # This should contain your 'gradlew' file
          - chmod +x ./gradlew
          - ./gradlew assembleDebug
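If you'd rather not hard-code the subfolder name, a variant (a sketch; it assumes there is exactly one gradlew file in the repository) is to derive the folder from the find output:
          - cd "$(dirname "$(find . -name gradlew -type f | head -1)")"
          - chmod +x ./gradlew
          - ./gradlew assembleDebug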
I'm currently building a custom Docker image to be used for integration tests. My requirement is to set it up with a custom configuration, with a default ingest pipeline and template mappings.
Dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch:5.6.2
ADD config /usr/share/elasticsearch/config/
USER root
RUN chown -R elasticsearch:elasticsearch config
RUN chmod +x config/setup.sh
USER elasticsearch
RUN elasticsearch-plugin remove x-pack
EXPOSE 9200
EXPOSE 9300
where config is a directory which contains:
> elasticsearch.yml for the configuration
> templates in the form of json files
> setup.sh - a script which runs curl against ES in order to register the pipelines with _ingest and the template mappings
The setup script looks like this:
#!/bin/bash
# This script sets up the es5 docker instance with the correct pipelines and templates
baseUrl='127.0.0.1:9200'
contentType='Content-Type:application/json'
# filebeat
filebeatUrl=$baseUrl'/_ingest/pipeline/filebeat-pipeline?pretty'
filebeatPayload='@pipeline/filebeat-pipeline.json'
echo 'setting filebeat pipeline...'
filebeatResult=$(curl -XPUT $filebeatUrl -H$contentType -d$filebeatPayload)
echo -e "filebeat pipeline setup result: \n$filebeatResult"
# template
echo -e "\n\nsetting up templates..."
sleep 1
cd template
for f in *.json
do
  templateName="${f%.*}"
  templateUrl=$baseUrl'/_template/'$templateName
  echo -e "\ncreating index template for $templateName..."
  templateResult=$(curl -XPUT $templateUrl -H$contentType -d@$f)
  echo -e "$templateName result: $templateResult"
  sleep 1
done
echo -e "\n\n\nCompleted ES5 Setup, refer to logs for details"
How do I build and run the image in such a way that the script gets executed AFTER Elasticsearch is up and running?
What I usually do is include a warmer script like yours, and at the beginning I add the following lines; there's no other way that I know of in Docker to wait for the underlying service to launch:
# wait until ES is up
until curl -o /dev/null -s --head --fail "$baseUrl"; do
  echo "Waiting for ES to start..."
  sleep 5
done
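To wire this up, one approach (a sketch; the exact startup command differs between Elasticsearch image versions, so treat the paths here as assumptions) is a small wrapper entrypoint that starts Elasticsearch in the background, runs the setup script, and then stays attached to the ES process:
#!/bin/bash
# start-with-setup.sh
/usr/share/elasticsearch/bin/elasticsearch &    # start ES in the background
bash /usr/share/elasticsearch/config/setup.sh   # blocks on the until-loop above, then registers everything
wait                                            # keep the container alive on the ES process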
If the template mappings do not evolve frequently, then you can try the solution below.
You can embed the templates in your custom image by saving the container state (creating a new image) using the following steps (a command sketch follows the list):
1. Run your image as per your Dockerfile (Elasticsearch will be started in it).
2. Use the docker exec command to run your template setup (curl command or script).
3. Use docker commit to save the container state and create a new image which will already have the templates.
4. Use the newly created image, which already has the template mappings. You don't need to run the template mapping as part of a script, since the image itself will have it.
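A minimal sketch of those steps (the image and container names are made up):
# 1. start a throwaway container from the base image
docker run -d --name es-seed custom-es:base
# 2. register the pipelines/templates once ES is up (setup.sh can reuse the wait loop above)
docker exec es-seed bash /usr/share/elasticsearch/config/setup.sh
# 3. freeze the container state into a new image
docker commit es-seed custom-es:with-templates
# 4. clean up; use custom-es:with-templates from now on
docker rm -f es-seed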