Push JHipster 6.0.1 to Gitlab repository - docker

After getting everything ready to deploy, I realized JHipster no longer generates a Dockerfile; packaging is done with Jib. The generated .gitlab-ci.yml has a docker-push stage with a command like this:
./mvnw jib:build -Djib.to.image=$IMAGE_TAG -Djib.to.auth.username=gitlab-ci-token -Djib.to.auth.password=$CI_BUILD_TOKEN
but it fails with:
[ERROR] Failed to execute goal com.google.cloud.tools:jib-maven-plugin:0.9.11:build (default-cli) on project test-project: Obtaining project build output files failed; make sure you have compiled your project before trying to build the image. (Did you accidentally run "mvn clean jib:build" instead of "mvn clean compile jib:build"?): /builds/amatos/test-project/target/classes -> [Help 1]
As that failed, I tried to run the command locally like this:
./mvnw jib:build -Djib.to.image=registry.gitlab.com/amatos/test-project:v6.0.1 -Djib.to.auth.username=amatos -Djib.to.auth.password=password
but instead of connecting to GitLab's registry, it tries to connect to registry.hub.docker.com:
[INFO] Retrieving registry credentials for registry.hub.docker.com...
What I would like to know is: how do I set it to connect to Gitlab instead of Docker Hub?

In order to connect to a custom registry, I changed -Djib.to.image to -Dimage, and it worked.
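For reference, the full working command would look roughly like this (a sketch based on the attempt above, with -Djib.to.image swapped for -Dimage; registry path, tag, and credentials are the example values from the question):

```
./mvnw jib:build -Dimage=registry.gitlab.com/amatos/test-project:v6.0.1 -Djib.to.auth.username=amatos -Djib.to.auth.password=password
```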

This is followed up by jhipster/generator-jhipster issue 9761, which states:
Since docker-push is done in another stage, a - target/classes entry is missing from the artifacts of the previous stage. Jib needs it. It should look like:
maven-package:
  stage: package
  script:
    - ./mvnw verify -Pprod -DskipTests -Dmaven.repo.local=$MAVEN_USER_HOME
  artifacts:
    paths:
      - target/*.jar
      - target/classes
    expire_in: 1 day
Possibly addressed by PR 9762 (merged), commit 50cc009, which is only in master and not yet referenced by any tag.

Related

GitLab CI: Docker runner and Maven repository cache not working

I'm trying to set up a GitLab CI pipeline on a Docker runner.
I have a Docker runner with a Maven image (maven:3.8.6-openjdk-11) that I'm trying to use for a pipeline that compiles a Maven project.
I have set up a Maven repository cache, but this cache doesn't seem to be working: every time my pipeline runs, it downloads the dependencies needed to compile my project from my Nexus repository. I expected it to download only the "new" dependencies and use the cache for the others, without downloading them again.
Below is the content of the .gitlab-ci.yml file:
variables:
  MAVEN_OPTS: >-
    -Dhttps.protocols=TLSv1.2
    -Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository
    -Dorg.slf4j.simpleLogger.showDateTime=true
    -Djava.awt.headless=true

cache:
  paths:
    - .m2/repository/

build:
  tags:
    - tdev
  script:
    - mvn --settings $MAVEN_SETTINGS clean compile
On each run it downloads the dependencies:
[...]
Downloading from infogreffe: https://xxxx/nexus/repository/infogreffe/org/wildfly/core/wildfly-core-security/19.0.0.Final/wildfly-core-security-19.0.0.Final.jar
Downloaded from infogreffe: https://xxxx/nexus/repository/infogreffe/org/wildfly/core/wildfly-controller-client/19.0.0.Final/wildfly-controller-client-19.0.0.Final.jar (214 kB at 70 kB/s)
Downloading from infogreffe: https://xxxx/nexus/repository/infogreffe/org/wildfly/security/wildfly-elytron-auth/2.0.0.Final/wildfly-elytron-auth-2.0.0.Final.jar
Downloaded from infogreffe: https://xxxx/nexus/repository/infogreffe/com/oracle/ojdbc5/11.2.0.2.0/ojdbc5-11.2.0.2.0.jar (2.0 MB at 666 kB/s)
[...]
Note that I use a local cache.
Thanks.
I expected that, by using a cache for my Maven repository, it wouldn't try to download dependencies on every pipeline run unless there are new dependencies. But maybe I misunderstood how caching works...
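One thing worth checking (a hedged suggestion, not a confirmed fix): GitLab cache paths are resolved relative to $CI_PROJECT_DIR, and each runner keeps its own local cache unless a distributed cache is configured, so a job picked up by a different runner starts cold. Pinning an explicit cache key, for example per branch, makes it easier to see in the job log whether the cache is being restored at all:

```yaml
cache:
  key: "$CI_COMMIT_REF_SLUG"   # one cache per branch; look for "Restoring cache" in the job log
  paths:
    - .m2/repository/
```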

Caching all dependencies in Docker image that has jetty:run in CMD

I'm trying to alter a Dockerfile ending with:
ENTRYPOINT ["mvn", "clean", "jetty:run"]
so that it downloads all the Maven dependencies into the image, rather than when you run a container.
I added:
RUN mvn dependency:go-offline
It downloaded a lot of dependencies during the build, but not all of those required to call jetty:run, because when I alter the ENTRYPOINT to
ENTRYPOINT ["mvn", "-o", "jetty:run"]
it fails when you run the image:
[ERROR] Failed to execute goal org.mortbay.jetty:jetty-maven-plugin:8.1.16.v20140903:run (default-cli) on project LODE: Execution default-cli of goal org.mortbay.jetty:jetty-maven-plugin:8.1.16.v20140903:run failed: Plugin org.mortbay.jetty:jetty-maven-plugin:8.1.16.v20140903 or one of its dependencies could not be resolved: Cannot access central (https://repo.maven.apache.org/maven2) in offline mode and the artifact org.codehaus.plexus:plexus-utils:jar:1.1 has not been downloaded from it before. -> [Help 1]
Can you somehow tell go-offline that you want to cache all the dependencies required to run a specific target (in this case jetty:run)? Or maybe my underlying problem can be solved some other way (it's not actually Docker-specific, but I wanted to show the motivation behind it)?
The project I'm altering is: https://github.com/essepuntato/LODE
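A workaround sketch (hedged: dependency:go-offline is known to miss some plugin runtime dependencies, and whether invoking the plugin's help goal resolves everything jetty:run needs is an assumption; base image and paths are also assumptions, not taken from the LODE project):

```dockerfile
# Assumed base image; the project targets an older Jetty 8 / Java 8 era stack
FROM maven:3-jdk-8
WORKDIR /app
COPY pom.xml .
# Pre-fetch project dependencies at build time (misses some plugin deps)
RUN mvn -B dependency:go-offline
# Also resolve the jetty plugin and its dependencies without starting the server;
# "|| true" in case the help goal is unavailable in this plugin version
RUN mvn -B jetty:help || true
COPY src ./src
ENTRYPOINT ["mvn", "-o", "jetty:run"]
```

If the offline run still reports a missing artifact (like plexus-utils above), another option is simply running the online ENTRYPOINT once during the build, accepting that it starts and must be stopped, so every artifact it touches lands in the image's local repository.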

GitLab Docker runner on Windows: msbuild fails

For months I've been using several shared runners to run my GitLab CI job. All my runners run on Linux or macOS hosts and it works great! This job compiles a .NET Standard project using the mono:latest image.
I use msbuild with NuGet restore, and here is my problem. I recently added my first shared runner on Windows, still using a Docker runner as usual, as described here (which is the way I always do it). But with this new Windows shared runner, and only this one, my job fails when msbuild restores NuGet packages from our local (subnetwork) NuGet repository.
I tried installing another Docker runner on another Windows host: same issue. I get this error:
error : Unable to load the service index for source
http://skirnir/NuGet/index.json error : No such host is known
Skirnir is a local server where my NuGet packages are hosted.
I tried from my job to "curl http://login:pwd@skirnir/NuGet/index.json" and it works, so when msbuild is executed from within the container using the mono image it can access Skirnir! But msbuild restore always fails to fetch the packages.
I tried replacing skirnir with its IP, but same problem, because index.json references skirnir in "@id": "http://skirnir/NuGet/".
Here is my gitlab job:
cache:
  key: "$CI_JOB_NAME"
  paths:
    - packages

variables:
  MAIN_DIR: "Proj1"

before_script:
  - nuget help > /dev/null
  - cp $HOME/.config/NuGet/NuGet.Config .
  - nuget sources add -Name skirnir_ci -Source http://skirnir/NuGet/index.json -UserName $NUGET_USER -Password $NUGET_PASSWORD -configfile $(pwd)/NuGet.Config
  - export NUGET_PACKAGES=$(pwd)/packages

build pcl:
  image: mono
  stage: build
  tags:
    - docker
  retry: 2
  only:
    - tags
    - branches
    - web
  variables:
    PROJECT: "Proj1"
  script:
    - export MSBUILD_PROP="/p:Configuration=Release;RestoreConfigFile=$(pwd)/NuGet.Config"
    - msbuild /t:restore "$MSBUILD_PROP" $MAIN_DIR/$PROJECT/$PROJECT.csproj
    - msbuild "$MSBUILD_PROP" $MAIN_DIR/$PROJECT/$PROJECT.csproj
Any idea?
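"No such host is known" suggests that name resolution for skirnir behaves differently for msbuild's restore than for curl inside the Windows-hosted containers. One thing to try, purely as a sketch (the IP addresses are hypothetical placeholders): point the runner's containers at the subnetwork's DNS server, or pin the host entry, in the [runners.docker] section of the runner's config.toml:

```toml
[runners.docker]
  # Hypothetical values: substitute your internal DNS server and skirnir's real IP
  dns = ["10.0.0.53"]
  extra_hosts = ["skirnir:10.0.0.42"]
```

Both dns and extra_hosts are passed through to the containers the runner creates, so every job on that runner would resolve skirnir the same way.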

npm run build in azure pipeline does not create build folder in the azure repository?

I am basically trying to create build and release pipelines for a React JS app. The tasks in the build pipeline include npm install, npm run build, then building and pushing a Docker image using a Dockerfile (nginx serving the build folder). Then in the release pipeline I want to do a kubectl apply on the nginx YAML.
My problem is that the npm run build task is not creating the build folder in the Azure repo, which is where I pushed my code.
I tried removing the line "#production /build" from the .gitignore file in the Azure repo.
Dockerfile used for building image
FROM nginx
COPY /build /usr/share/nginx/html
Since the build folder was not created in the Azure repo, the build-docker-image task in the build pipeline keeps failing. Please help.
Here is a contributor giving a solution in a case with a similar issue.
His solution is:
For the "npm build" task, the custom command (in the question above, "build" and "npm run-script build" were tried) should be "run-script build". The build then successfully created the dist folder.
For details, you can refer to this case.
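Applied to an azure-pipelines.yml, that suggestion would look roughly like this (a sketch using the built-in Npm task; task name and inputs follow the standard Azure Pipelines task schema):

```yaml
steps:
  - task: Npm@1
    inputs:
      command: install
  - task: Npm@1
    inputs:
      command: custom
      customCommand: 'run-script build'   # produces the build/ folder in the agent workspace
```

Note that this creates the folder in the pipeline's working directory on the agent; pipeline build output is not pushed back into the repository itself.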

Why is the outdir from my kcov command always empty?

So, I need to integrate kcov into my gitlab-ci to see code coverage on a test executable. The kcov documentation states that I need to run "kcov /path/to/outdir ./myexec" to generate a report as an HTML file. However, even though the command succeeds, /path/to/outdir is still empty, and I don't know why, since the tests pass and kcov returns no errors.
here is the .gitlab-ci.yml:
stage: coverage
dependencies:
  - Build
script:
  - mkdir build/test/kcov
  - cd build/test
  - kcov --include-path=../../src /kcov ./abuse-test
  - cd kcov
  - ls
artifacts:
  paths:
    - TP3/build
    - TP3/src
My test executable is abuse-test; it is generated via CMake/make and lives at TP3/build/test/abuse-test.
The console output in the CI is the following:
on igl601-runner3 5d2b3c01
Using Docker executor with image depot.dinf.usherbrooke.ca:4567/e19-igl601/eq09/image_tp3 ...
Pulling docker image depot.dinf.usherbrooke.ca:4567/e19-igl601/eq09/image_tp3 ...
Using docker image sha256:c2cf0a7c10687670c7b28ee23ac06899de88ebb0d86e142bfbf65171147fc167 for depot.dinf.usherbrooke.ca:4567/e19-igl601/eq09/image_tp3 ...
Running on runner-5d2b3c01-project-223-concurrent-0 via dinf-prj-16...
Fetching changes...
Removing TP3/build/
HEAD is now at b2e1277 Update .gitlab-ci.yml
From https://depot.dinf.usherbrooke.ca/e19-igl601/eq09
b2e1277..7cf0af5 master -> origin/master
Checking out 7cf0af56 as master...
Skipping Git submodules setup
Downloading artifacts for Build (8552)...
Downloading artifacts from coordinator... ok id=8552 responseStatus=200 OK token=Pagxjp_C
$ cd TP3
$ mkdir build/test/kcov
$ cd build/test
$ kcov --include-path=../../src /kcov ./abuse-test
===============================================================================
All tests passed (3 assertions in 3 test cases)
$ cd kcov
$ ls
Uploading artifacts...
TP3/build: found 2839 matching files
TP3/src: found 211 matching files
Uploading artifacts to coordinator... ok id=8554 responseStatus=201 Created token=PxDHHjxf
Job succeeded
the kcov documentation states: "/path/to/outdir will contain lcov-style HTML output generated continuously while the application run"
And yet, when I browse the artifacts, I find nothing.
Hi, it looks like you're specifying /kcov as the outdir:
kcov --include-path=../../src /kcov ./abuse-test
Since you're working on a *nix-based system, the leading / implies an absolute path from the root of your filesystem.
The cd kcov step assumes a relative path (down from your current directory), since it is missing the /.
So I guess changing your kcov command to:
kcov --include-path=../../src kcov ./abuse-test
would fix your issue.
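Applied to the .gitlab-ci.yml above, the script block would become (same steps, only the outdir argument changed to a relative path, so the report lands in the build/test/kcov directory created earlier and gets picked up by the artifacts):

```yaml
script:
  - mkdir build/test/kcov
  - cd build/test
  - kcov --include-path=../../src kcov ./abuse-test   # relative outdir: build/test/kcov
  - cd kcov
  - ls
```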
