CircleCI run of pylama/pycodestyle not terminating

I'm trying to use CircleCI to check whether the code style guidelines have been applied correctly; otherwise the job should fail.
This is the config I have:
- run:
    name: Check codestyle guidelines
    command: |
      . venv/bin/activate
      pylama --options setup.cfg --ignore C901
      pycodestyle --config setup.cfg
But the task just keeps running forever. I don't know why!

I figured it out: I had to exclude the venv/* directory in setup.cfg, since it is in the same directory as the actual application and contains all the installed libraries.
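For reference, a minimal sketch of the setup.cfg entries that exclude the virtualenv (the section names follow the pylama and pycodestyle configuration conventions; the exact glob patterns are assumptions based on my layout):
[pylama]
skip = venv/*

[pycodestyle]
exclude = venv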


Using JMeter plugins with justb4/jmeter Docker image results in error

Goal
I am using Docker to run JMeter in Azure DevOps. I am trying to use Blazemeter's Parallel Controller, which is not native to JMeter. So, following the justb4/jmeter image documentation, I used the following command to set up the image and run the JMeter test:
docker run --name jmetertest -i -v /home/vsts/work/1/s/plugins:/plugins -v $ROOTPATH:/test -w /test justb4/jmeter ${@:2}
Error
However, it produces the following error while trying to handle the plugin (I know the plugin makes the difference, because the same command works without it):
cp: can't create '/test/lib/ext': No such file or directory
As far as I understand, this error occurs when one of the parent directories of the directory you are trying to create does not exist. Is there something I am doing wrong, or is there actually something wrong with the image?
References
For reference, I will include links to the image documentation and the repository.
Image: https://hub.docker.com/r/justb4/jmeter
Repository: https://github.com/justb4/docker-jmeter
Looking into the Dockerfile:
ENV JMETER_HOME /opt/apache-jmeter-${JMETER_VERSION}
Looking into entrypoint.sh
if [ -d /plugins ]
then
  for plugin in /plugins/*.jar; do
    cp $plugin $(pwd)/lib/ext
  done;
fi
It basically copies the plugins from the /plugins folder (if present) into the lib/ext folder relative to the current working directory.
I don't know why you added the -w /test stanza to your command line, but it explicitly "tells" the container that the working directory is /test, not /opt/apache-jmeter-xxxx; that's why the script fails to copy the files.
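For example, a hedged sketch of the same invocation with the -w /test override dropped, so $(pwd) stays at JMeter's home directory and lib/ext exists (the test plan name is an assumption; note that any paths passed to JMeter must now carry the /test prefix):
docker run --name jmetertest -i \
  -v /home/vsts/work/1/s/plugins:/plugins \
  -v $ROOTPATH:/test \
  justb4/jmeter -n -t /test/testplan.jmx -l /test/results.jtl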
In general I don't think this approach is very valid, because:
In Azure DevOps you won't have your "local" folder (unless you want to put the plugin binaries under version control)
Some JMeter plugins have other .jars as dependencies, so when you install a plugin you should (see the sketch after this list):
put the plugin itself under the lib/ext folder of your JMeter installation
put the plugin's dependencies under the lib folder of your JMeter installation
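For illustration, a hedged Dockerfile sketch of that manual layout (the jar file names are hypothetical placeholders, not real artifact names):
COPY jmeter-plugins-parallel.jar /opt/apache-jmeter-${JMETER_VERSION}/lib/ext/
COPY jmeter-plugins-dependency.jar /opt/apache-jmeter-${JMETER_VERSION}/lib/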
So I would recommend amending the Dockerfile to download the JMeter Plugins Manager and install the plugin(s) you need from the command line.
Something like:
RUN wget https://jmeter-plugins.org/get/ -O /opt/apache-jmeter-${JMETER_VERSION}/lib/ext/jmeter-plugins-manager.jar
RUN wget https://repo1.maven.org/maven2/kg/apc/cmdrunner/2.2/cmdrunner-2.2.jar -P /opt/apache-jmeter-${JMETER_VERSION}/lib/
RUN java -cp /opt/apache-jmeter-${JMETER_VERSION}/lib/ext/jmeter-plugins-manager.jar org.jmeterplugins.repository.PluginManagerCMDInstaller
RUN /opt/apache-jmeter-${JMETER_VERSION}/bin/PluginsManagerCMD.sh install bzm-parallel
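With those lines in the Dockerfile, the plugin is baked into the image and no -w override is needed; a hedged usage sketch (the image tag and test plan name are assumptions):
docker build -t jmeter-with-plugins .
docker run -i -v $ROOTPATH:/test jmeter-with-plugins -n -t /test/testplan.jmx -l /test/results.jtl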

How to generate and display coverage when running tests with Pongo for custom Kong API Gateway plugins written in Lua

I am writing a few custom Kong plugins in Lua. I am using Kong 2.3.3 and Lua 5.1.
I have some test cases (unit tests + integration tests) and I am running them with the pongo run -coverage option. I have already installed luacov (and also cluacov, both via luarocks install) and all my tests pass, but no luacov files with coverage data are being generated. I am not running pongo from Docker; I have installed and configured it on my local machine (Linux Ubuntu 20.04).
I have already tried a few things:
my .busted file sets coverage = true, verbose = true and output = "gtest" (I already tried utfTerminal, tap and json too)
I tried adding luacov as a dependency to my rockspec file... the build does not fail, but no coverage file is generated
I even tried running the tests without pongo, using busted directly, but this is a very bad option because things like spec.helpers or the cjson lib are not on my LUA_PATH
A quick way to do this is to modify pongo itself.
Edit your pongo.sh file to:
add the --coverage flag to the busted call
run luacov afterwards to generate the report
display the report with cat luacov.report.out
Locate where busted is called (line 959 for me) and change it to:
"/bin/sh" "-c" "bin/busted --coverage --helper=bin/busted_helper.lua ${busted_params[*]} ${busted_files[*]};luacov;cat luacov.report.out"
To install luacov, edit assets/Dockerfile and add luacov after the busted installation:
&& luarocks install busted-htest \
&& luarocks install luacov \
pongo run will then give you:
[...]
==============================================================================
Summary
==============================================================================
File Hits Missed Coverage
-------------------------------------------------------------------------------------------------------
/kong-plugin/kong/plugins/myplugin/schema.lua 105 1 99.06%
/kong-plugin/spec/myplugin/01-schema_spec.lua 199 5 97.55%
[...]
Alternatively, you can create a Docker image based on pongo.
spec/unit/docker/Dockerfile
FROM kong-pongo-test:2.3.2
USER root
RUN luarocks install luacov
WORKDIR /kong-plugin
COPY . .
spec/unit/docker/run.sh
#!/bin/sh
busted --coverage spec/unit
luacov
cat luacov.report.out
Run
docker build -f spec/unit/docker/Dockerfile -t my-coverage .
docker run my-coverage sh spec/unit/docker/run.sh
Pongo gained some support for this (still a PR). Note that it only covers unit tests, not integration ones.
See https://github.com/Kong/kong-pongo/pull/184
By the way, the other answers are too complex IMO: you can add a .pongo/pongo-setup.sh to install LuaCov and move the .luacov file from /kong-plugin to /kong. That should be all that is necessary.
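A minimal sketch of such a .pongo/pongo-setup.sh (the luarocks call is standard; moving .luacov to /kong follows the description above):
#!/bin/sh
# install LuaCov inside the test image
luarocks install luacov
# move the LuaCov config to the directory the tests actually run from
mv /kong-plugin/.luacov /kong/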
Running tests with coverage can simply be done by passing the flag, without any need to edit pongo or the Dockerfile. Try pongo run -- --coverage, for example.

Can I use WORKDIR in my Dockerfile with GitHub Actions? (Also, resolving Jest "No tests found")

Can I use WORKDIR in my Dockerfile with GitHub Actions?
I am switching from one CI provider to GitHub Actions and found that I had a step that runs docker run <temp_image> npm test -- --coverage. Something seemed to alter the way my Jest tests were run compared to my previous CI, and I would receive the error:
No tests found, exiting with code 1
Run with `--passWithNoTests` to exit with code 0
In /app
18 files checked.
testMatch: /**/?(*.)+(spec|test).[jt]s?(x) - 1 match
testPathIgnorePatterns: /.next/, /node_modules/, /testconfig/ - 16 matches
testRegex: - 0 matches
Pattern: - 0 matches
That one testMatch would run correctly in my previous CI solution, using the same command.
Some people hitting this error were accidentally ignoring their test path: Jest No Tests found
I tried a bunch of different approaches, the main hunch being that my tests were being run in the wrong directory. Using my shotgun approach I tried:
Specifying <rootDir> for testMatch and testPathIgnorePatterns
Removing testPathIgnorePatterns altogether
Specifying the --config path, and also the --no-cache option, for Jest
Specifying the -w working directory in the docker run options
And a multi-command approach: docker run <temp_image> /bin/sh -c "cd /app && npm test"
Ultimately, I found this line in the GitHub Actions docs: Dockerfile Instructions And Overrides
GitHub sets the working directory path in the GITHUB_WORKSPACE environment variable. It's recommended to not use the WORKDIR instruction in your Dockerfile.
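Conceptually, the runner launches step containers along these lines (an illustrative sketch of the documented behavior, not the literal invocation GitHub uses):
docker run \
  -v "$GITHUB_WORKSPACE":"$GITHUB_WORKSPACE" \
  -w "$GITHUB_WORKSPACE" \
  <image> npm test -- --coverage
so a WORKDIR baked into the image no longer describes where the code actually is.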
Removing the WORKDIR instruction from my Dockerfile fixed my error.
BUT, is there a way around having to remove WORKDIR from my Dockerfile? Using WORKDIR seems to be a Docker best practice, and I would prefer to follow Docker's guidelines over GitHub Actions'.
Thank you for your time!

jenkinsci docker install-plugins.sh fails build

I am trying to package the vanilla Jenkins image into Docker using this tutorial: https://github.com/jenkinsci/jenkinsfile-runner/blob/master/DOCKER.md
Everything works until one of the last steps, where the Dockerfile tries to run install-plugins.sh against a plugins.txt file that was just copied into its own directory. This is the error I get when running docker build:
/usr/local/bin/install-plugins.sh: line 148: TEMP_ALREADY_INSTALLED: unbound variable
The command '/bin/sh -c /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt' returned a non-zero code: 1
Here is my plugins.txt file:
pipeline-model-definition:latest
Just the one line.
I cannot seem to figure out what might fix this issue. I tried the suggestion from this answer: https://github.com/jenkinsci/docker/issues/348, but the command line spat out the exact same error as above. Any help is appreciated; thanks in advance.
That variable was defined in plugins.sh (which is deprecated and supposed to be replaced by install-plugins.sh)
# the war includes a # of plugins, to make the build efficient filter out
# the plugins so we dont install 2x - there about 17!
if [ -d "$JENKINS_HOME" ]
then
  TEMP_ALREADY_INSTALLED=$JENKINS_HOME/preinstalled.plugins.$$.txt
else
  echo "ERROR $JENKINS_HOME not found"
  exit 1
fi
But it is not defined in install-plugins.sh, only used (in line 155)
Try and set TEMP_ALREADY_INSTALLED first, as shown above, before calling install-plugins.sh.
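A hedged sketch of that workaround in the Dockerfile (the exact temp-file path is an assumption; the script only needs the variable to be defined, mirroring the plugins.sh snippet above):
# define the variable install-plugins.sh expects before invoking it
RUN export TEMP_ALREADY_INSTALLED=/usr/share/jenkins/ref/preinstalled.plugins.txt \
    && /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt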

Error compiling Go from source in an Alpine Docker container: "loadinternal: cannot find runtime/cgo"

I'm trying to build an Alpine Docker image for the FIPS-enabled version of Go. To do this, I am trying to build Go from source using the dev.boringcrypto branch of the golang/go repository.
Upon running ./all.bash, I get the following errors:
Step 4/4 : RUN cd go/src && ./all.bash
---> Running in 00db552598f7
Building Go cmd/dist using /usr/lib/go.
# _/go/src/cmd/dist
loadinternal: cannot find runtime/cgo
/usr/lib/go/pkg/tool/linux_amd64/link: running gcc failed: exit status 1
/usr/lib/gcc/x86_64-alpine-linux-musl/6.4.0/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find Scrt1.o: No such file or directory
/usr/lib/gcc/x86_64-alpine-linux-musl/6.4.0/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find crti.o: No such file or directory
/usr/lib/gcc/x86_64-alpine-linux-musl/6.4.0/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find -lssp_nonshared
collect2: error: ld returned 1 exit status
The command '/bin/bash -c cd go/src && ./all.bash' returned a non-zero code: 2
Which causes the installation tests to fail and kicks me out of the Docker image build.
I have gcc installed on the image, and I tried setting the environment variable CGO_ENABLED=0 as suggested in other questions, but neither of these things seems to alleviate the problem.
I'm at my wits' end with this problem. Has anybody else run into similar issues? I don't understand why this is happening, as the build runs fine in an Ubuntu container.
Thanks!
I had the same error messages, although I was compiling a different project.
It turns out that Alpine needs the musl-dev package installed for this to work, so you need to make sure it is included in your Dockerfile, or install it manually by running apk add --no-cache musl-dev.
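A minimal Dockerfile sketch of that fix (the Alpine tag and the extra packages are assumptions; bash and gcc are included because all.bash is a bash script that drives the system toolchain, and go provides the bootstrap compiler the build log shows under /usr/lib/go):
FROM alpine:3.7
# musl-dev supplies Scrt1.o, crti.o and libssp_nonshared, which ld could not find
RUN apk add --no-cache bash gcc go musl-dev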
Either Go isn't correctly installed on the image or GOROOT is wrong.
Put go tool dist banner and go tool dist env in your all.bash for clues.
