Can I use WORKDIR in my Dockerfile with GitHub Actions? (Also, resolving Jest "No tests found")

Can I use WORKDIR in my Dockerfile with GitHub Actions?
I am switching from another CI provider to GitHub Actions and found that I had a step that runs docker run <temp_image> npm test -- --coverage. Something seemed to be altering the way my Jest tests were run compared to my previous CI, and I would receive the error:
No tests found, exiting with code 1
Run with `--passWithNoTests` to exit with code 0
In /app
18 files checked.
testMatch: /**/?(*.)+(spec|test).[jt]s?(x) - 1 match
testPathIgnorePatterns: /.next/, /node_modules/, /testconfig/ - 16 matches
testRegex: - 0 matches
Pattern: - 0 matches
That one testMatch would run correctly in my previous CI solution, using the same command.
Some people with this error were accidentally ignoring their test path: Jest No Tests found
I tried a bunch of different approaches -- the main hunch being that my tests were being run in the incorrect directory. Using my shotgun approach I tried:
Specifying <rootDir> for testMatch and testPathIgnorePatterns
Removing testPathIgnorePatterns altogether
Specifying the --config path, also --no-cache, options for Jest
Specifying the -w working directory on docker run options
And a multi-command approach: docker run <temp_image> /bin/sh -c "cd /app && npm test"
Ultimately, I found this line in the GitHub Actions docs: Dockerfile Instructions and Overrides
GitHub sets the working directory path in the GITHUB_WORKSPACE environment variable. It's recommended to not use the WORKDIR instruction in your Dockerfile.
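If I'm reading that right, for a Docker container action the runner mounts the workspace into the container and overrides whatever working directory the image sets, roughly like this (a simplified sketch of the documented behaviour, not the runner's exact command):
docker run \
  -v "<host workspace dir>":/github/workspace \
  -e GITHUB_WORKSPACE=/github/workspace \
  --workdir /github/workspace \
  <action-image> <entrypoint-args>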
Removing the WORKDIR instruction from my Dockerfile fixes the error.
BUT, is there a way around having to remove WORKDIR from my Dockerfile? Using WORKDIR seems to be a Docker best practice, and I would prefer to follow the Docker guidelines rather than the GitHub Actions ones.
Thank you for your time!


Share go modules with docker builder stage

[EDIT - added clarity]
Here is my current env setup:
$GOPATH = /home/fzd/go
projectDir = /home/fzd/go/src/github.com/fzd/amazingo
amazingo has a go.mod file that lists several (let's say thousands) dependencies.
So far I have been running go build -o bin/amazingo cmd/main.go, but I want to share this with other people and have a build command that is environment-independent. Using go build has the advantage of downloading each dependency once -- and then reusing them from ${GOPATH}/pkg/mod, which saves time and bandwidth.
I want to build in a multistage docker image, so I go with
> cat /home/fzd/go/src/github.com/fzd/amazingo/Dockerfile
FROM golang:1.17 as builder
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /bin/amazingo cmd/main.go
FROM alpine:latest
COPY --from=builder /bin/amazingo /amazingo
ENTRYPOINT ["/amazingo"]
As you can expect, the builder is "naked" when I start it, so it has to download all my dependencies when I run docker build -t amazingo:0.0.1 . (and it will do so every time I call it, which can be several times a day).
Fortunately, I already have most of these dependencies on my disk. I would be happy to share these files (that are located in my $GOPATH/pkg/mod) with the builder, and help it build faster on my machine.
So the question is: how can I share my ${GOPATH} (or ${GOPATH}/pkg/mod) with the builder?
I tried adding the following to the builder stage:
ARG SRC_GOPATH
COPY ${SRC_GOPATH} /go
and called docker build --build-arg SRC_GOPATH=${GOPATH} -t amazingo:0.0.1 ., but it wasn't good enough: I got an error (COPY failed: file not found in build context or excluded by .dockerignore: stat home/fzd/go: file does not exist).
I hope this update brings a bit more clarity to the problem.
=======
I have a project with a go.mod file.
I want to build that project using a multistage docker image.
(this article is a perfect example)
The issue is that I have "lots" of dependencies, and each of them will be downloaded inside my Docker builder stage.
Is there a way to "share" my GOPATH/pkg/mod with the docker build... command (in some ways, having a local cache) ?
Your end goal isn't completely clear, but the way that I use a multistage build would look something like this for a (dirt-simple) go app, assuming that you ultimately want the docker container to run your go app. You will need to get your source into the build container somehow as well - that is not shown here:
FROM golang:1.17.2-alpine3.14 as builder
WORKDIR /my/app/source/dir
RUN go get && go build -o /path/to/my/app/binary
FROM alpine:3.14 AS release
# install runtime deps, if any
# create necessary files and folders, if any
COPY --from=builder /path/to/my/app/binary /usr/local/bin
ENTRYPOINT /usr/local/bin/binary --options
In this way, the source of your application and all dependencies will not be present in the released image, only the compiled binary.
Of course you don't have to specify an output path for the build; I think it just makes this example a little clearer. And of course you can use whatever base image(s) you want; I'm treating this as though you don't need the Go toolchain in your release image.
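For the caching part of the question, one common pattern (a sketch that assumes go.mod and go.sum are checked in; it is not part of the answer above) is to let Docker's layer cache play the role of ${GOPATH}/pkg/mod: copy only the module files first, download the modules in their own layer, and only then copy the rest of the source.
FROM golang:1.17 as builder
WORKDIR /src
# Module files first: this layer, and the downloaded modules, are reused
# as long as go.mod/go.sum do not change.
COPY go.mod go.sum ./
RUN go mod download
# Copying the source afterwards means code edits no longer invalidate the download layer.
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /bin/amazingo cmd/main.go

FROM alpine:latest
COPY --from=builder /bin/amazingo /amazingo
ENTRYPOINT ["/amazingo"]
With BuildKit enabled you can go one step further and keep the module cache across go.mod changes, e.g. RUN --mount=type=cache,target=/go/pkg/mod go build ... instead of the plain RUN above.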

Using JMeter plugins with justb4/jmeter Docker image results in error

Goal
I am using Docker to run JMeter in Azure Devops. I am trying to use Blazemeter's Parallel Controller, which is not native to JMeter. So, according to the justb4/jmeter image documentation, I used the following command to get the image going and run the JMeter test:
docker run --name jmetertest -i -v /home/vsts/work/1/s/plugins:/plugins -v $ROOTPATH:/test -w /test justb4/jmeter ${#:2}
Error
However, it produces the following error while trying to accommodate the plugin (I know the plugin makes the difference because I tested without it):
cp: can't create '/test/lib/ext': No such file or directory
As far as I understand, this is an error produced when one of the parent directories of the directory you are trying to make does not exist. Is there something I am doing wrong, or is there actually something wrong with the image?
References
For reference, I will include links to the image documentation and the repository.
Image: https://hub.docker.com/r/justb4/jmeter
Repository: https://github.com/justb4/docker-jmeter
Looking into the Dockerfile:
ENV JMETER_HOME /opt/apache-jmeter-${JMETER_VERSION}
Looking into entrypoint.sh
if [ -d /plugins ]
then
    for plugin in /plugins/*.jar; do
        cp $plugin $(pwd)/lib/ext
    done;
fi
It basically copies the plugins from the /plugins folder (if it is present) into the lib/ext folder relative to the current working directory.
I don't know why you added the -w /test option to your command line, but it explicitly "tells" the container that the working directory is /test, not /opt/apache-jmeter-xxxx, which is why the script fails to copy the files.
In general I don't think the approach is very valid, because:
In Azure DevOps you won't have your "local" folder (unless you want to put the plugin binaries under version control)
Some JMeter plugins have other .jars as dependencies, so when you're installing a plugin you should:
put the plugin itself under /lib/ext folder of your JMeter installation
put the plugin dependencies under /lib folder of your JMeter installation
So I would recommend amending the Dockerfile to download the JMeter Plugins Manager and install the plugin(s) you need from the command line.
Something like:
RUN wget https://jmeter-plugins.org/get/ -O /opt/apache-jmeter-${JMETER_VERSION}/lib/ext/jmeter-plugins-manager.jar
RUN wget https://repo1.maven.org/maven2/kg/apc/cmdrunner/2.2/cmdrunner-2.2.jar -P /opt/apache-jmeter-${JMETER_VERSION}/lib/
RUN java -cp /opt/apache-jmeter-${JMETER_VERSION}/lib/ext/jmeter-plugins-manager.jar org.jmeterplugins.repository.PluginManagerCMDInstaller
RUN /opt/apache-jmeter-${JMETER_VERSION}/bin/PluginsManagerCMD.sh install bzm-parallel
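Putting those pieces together, a small derived image could look roughly like this (a sketch only: it assumes the justb4/jmeter base image keeps JMETER_HOME set, as its Dockerfile suggests, and that wget and java are available in it):
FROM justb4/jmeter
# Install the Plugins Manager, its cmdrunner dependency, and the Parallel Controller
# plugin at build time, so no /plugins volume mount is needed when the test runs.
RUN wget https://jmeter-plugins.org/get/ -O ${JMETER_HOME}/lib/ext/jmeter-plugins-manager.jar \
 && wget https://repo1.maven.org/maven2/kg/apc/cmdrunner/2.2/cmdrunner-2.2.jar -P ${JMETER_HOME}/lib/ \
 && java -cp ${JMETER_HOME}/lib/ext/jmeter-plugins-manager.jar org.jmeterplugins.repository.PluginManagerCMDInstaller \
 && ${JMETER_HOME}/bin/PluginsManagerCMD.sh install bzm-parallel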

Azure DevOps builds for Docker with npm installing from private feed have stopped working

I have a few Docker builds on Azure DevOps for React apps which include packages from a private npm feed also hosted on Azure DevOps. Recently the builds have started failing at the npm install command.
In order to authenticate the container to install from the private feed I've always used an .npmrc file. This is saved locally as .npmrc.docker and looks like this:
#<package-scope>:registry=https://<devops-username>.pkgs.visualstudio.com/_packaging/<feed-name>/npm/registry/
//<devops-username>.pkgs.visualstudio.com/_packaging/<feed-name>/npm/registry/:username=<feed-name>
//<devops-username>.pkgs.visualstudio.com/_packaging/<feed-name>/npm/registry/:_password=${NPM_TOKEN}
//<devops-username>.pkgs.visualstudio.com/_packaging/<feed-name>/npm/registry/:email=npm requires email to be set but doesn't use the value
//<devops-username>.pkgs.visualstudio.com/_packaging/<feed-name>/npm/:username=<feed-name>
//<devops-username>.pkgs.visualstudio.com/_packaging/<feed-name>/npm/:_password=${NPM_TOKEN}
//<devops-username>.pkgs.visualstudio.com/_packaging/<feed-name>/npm/:email=npm requires email to be set but doesn't use the value
I define a scoped package source at the top and the rest is generated from Azure DevOps via its Connect to feed wizard. ${NPM_TOKEN} is my feed password which I pass into the docker build command as a build argument.
The part of my Dockerfile which uses this looks like this:
FROM node:alpine as build
ARG NPM_TOKEN
COPY ./.npmrc.docker /app/.npmrc
COPY ./package.json /app/package.json
WORKDIR /app
RUN npm install
RUN rm -f .npmrc
In my Azure DevOps build pipeline this has always worked. The Build image part of the pipeline feeds in this build arg from a variable like this - --build-arg NPM_TOKEN=$(ArtifactsNpmPat) - where ArtifactsNpmPat is a variable in my library.
Recently my builds have started failing. Initially I assumed my token had expired so I generated and stored a new one. Here's the error from the agent:
[error]The command '/bin/sh -c npm install' returned a non-zero code: 1
[error]The process '/usr/bin/docker' failed with exit code 1
Note that the same process continues to work locally. So I've no idea how to diagnose this. I did find this SO post which led me to update my Dockerfile to look like this:
FROM node:alpine as build
ARG NPM_TOKEN
COPY ./.npmrc.docker /app/.npmrc
COPY ./package.json /app/package.json
WORKDIR /app
RUN npm install -g vsts-npm-auth
RUN vsts-npm-auth -config .npmrc
RUN npm install
RUN rm -f .npmrc
However, a docker build now generates a pretty crazy error from the RUN vsts-npm-auth command.
/usr/local/bin/vsts-npm-auth: line 1: MZ�╚╝���#���: not found
/usr/local/bin/vsts-npm-auth: line 1: �ԞO���: not found
/usr/local/bin/vsts-npm-auth: line 57: (...dozens more lines of the same binary garbage...)
/usr/local/bin/vsts-npm-auth: line 58: syntax error: unexpected word (expecting ")")
So I'm stuck. Has something changed in DevOps around authenticating with its private feeds? Not that I'm aware of, but like I say these builds just stopped working some time in October without me changing anything. Advice appreciated.
You are trying to run vsts-npm-auth, which is not compatible with the Linux platform; the Azure documentation explains that on Linux you only need the .npmrc file with credentials.
As dawid debinski answered, vsts-npm-auth doesn't run on the Linux platform, as written here.
Check out this documentation, in which you authenticate your pipeline with the npm authenticate task, or use the YAML version.
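For reference, a YAML sketch of that second suggestion might look like the following (the task and its workingFile input come from the npmAuthenticate documentation; the file name and surrounding steps are assumptions based on the question):
steps:
  - task: npmAuthenticate@0        # writes feed credentials into the file below on the agent
    inputs:
      workingFile: .npmrc.docker   # the file the question's Dockerfile copies in; adjust as needed
  # ...then keep the existing Docker build step: the COPY'd .npmrc already carries credentials,
  # so no vsts-npm-auth call is needed inside the Linux container.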

Docker and trying to build an image using Azure Pipelines

Hopefully someone can help me see the wood for the trees as they say!
I am no Linux expert and therefore I am probably missing something very obvious.
I have a dockerfile which contains the following:
FROM node:9.8.0-alpine as node-webapi
EXPOSE 3000
LABEL authors="David Sheardown"
COPY ["package.json", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
COPY . /home/vsts/work/1/s/
CMD ["node", "index.js"]
I then have an Azure pipeline setup as the following image shows:
My issue seems to be the build process cannot find the dockerfile itself:
##[error]Unhandled: No Dockerfile matching /home/vsts/work/1/s/**/Dockerfile was found.
Again, apologies in advance for my lack of Linux knowledge.. there is something silly I have done or not done ;)
P.S: I forgot to mention in Azure Pipelines I am using "Hosted Linux Preview"
-- UPDATE --
This is the get sources stage:
I would recommend adding the exact path to where the Dockerfile resides in your repository:
Dockerfile: subpath/Dockerfile
You're misusing this absolute path, both within the dockerfile and in the docker build task:
/home/vsts/work/1/s/
That is a path that exists on the build agent (not within the dockerfile), but it may or may not exist on any given pipeline run: if the agent happens to use work directory 2, or 3, or any other number, or if you want to run this pipeline on a different type of agent, your path will be invalid.
If you want to use a dockerfile in your checked out code, then you should do so by using a relative path (based on the root of your code repository), for example:
buildinfo/docker/Dockerfile
Note: that was just an example, to show the kind of path you should use; here you should be using the actual relative path in your actual code repo.
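For illustration, the question's Dockerfile rewritten with only relative paths might look something like this (a sketch: the WORKDIR location is an arbitrary choice, not something prescribed by the answer):
FROM node:9.8.0-alpine as node-webapi
EXPOSE 3000
LABEL authors="David Sheardown"
# Work inside a directory that belongs to the image, not to the build agent.
WORKDIR /usr/src/app
COPY ["package.json", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
# Copy the rest of the source relative to the build context.
COPY . .
CMD ["node", "index.js"]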

Development dependencies in Dockerfile or separate Dockerfiles for production and testing

I'm not sure if I should create different Dockerfile files for my Node.js app. One for production without the development dependencies and one for testing with the development dependencies included.
Or one file which is basically the development Dockerfile.dev. The main difference between the two files is the npm install command:
Production:
FROM ...
...
RUN npm install --quiet --production
...
CMD ...
Development/Test:
FROM ...
...
RUN npm install
...
CMD ...
The question arises because I want to be able to run my tests inside the container via the docker run command. Therefore I need the test dependencies (typically dev dependencies for me).
It seems a little odd to put dependencies that are not needed in production into the image. On the other hand, creating/maintaining a second Dockerfile.dev with just minor differences doesn't seem right either. So what is a good practice for this kind of problem?
No, you don't need to have different Dockerfiles and in fact you should avoid that.
The goal of docker is to ship your app as an immutable, well-tested artifact (a docker image) which is identical for production and test and even dev.
Why? Because if you build different artifacts for test and production, how can you guarantee that what you have already tested also works in production? You can't, because they are two different things.
Given all that, if by test you mean unit tests, then you can mount your source code inside a docker container and run the tests without building any docker image, and that's fine. Remember, you can build an image for the tests, but that is terribly slow and makes development quite difficult, which is not good at all. Then, once your tests pass, you can build your app container safely.
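For example (just a sketch; the node:alpine tag and the /app path are assumptions, not from the answer):
# Run the unit tests against the mounted source - no test image is built.
docker run --rm -v "$PWD":/app -w /app node:alpine sh -c "npm install && npm test"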
But if you mean acceptance tests that actually need to run against your running application, then you should create one image for your app (only one), run it, and run the tests from another container (mounting the test source code, for example) against that container. This obviously means that what you build for your app is separate from the npm install you run for your tests.
I hope this gives you some overview.
Well, then you'll have to maintain several Dockerfiles that are almost identical. Instead I recommend using a Node.js feature like the production profile. And one more recommendation regarding
RUN npm install --quiet --production
It is better to create a separate .sh file and do something like this instead:
ADD ./scripts/run.sh /run.sh
RUN chmod +x /*.sh
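A hypothetical run.sh along these lines (the answer does not spell it out) could pick the install flavor from NODE_ENV instead of hard-coding it in the Dockerfile:
#!/bin/sh
# Hypothetical script: install devDependencies only outside production builds.
if [ "$NODE_ENV" = "production" ]; then
    npm install --quiet --production
else
    npm install
fi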
And also think about starting to use Gulp.
UPD #1
By default npm install also installs devDependencies. To get around this, use npm install --production OR set the NODE_ENV environment variable to the value production.
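For example, either of these skips devDependencies:
npm install --production
NODE_ENV=production npm install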
Putting the script line in a separate file is a good practice so you don't have to change the Dockerfile often. If you need changes next time, you'll only have to update the script file and you're done. In the future that script can also pick up any additional work.
