For months I've been using several shared runners to run my GitLab CI jobs. All my runners run on Linux or macOS hosts and it works great! The job compiles a .NET Standard project using the mono:latest image.
I use msbuild with a NuGet restore, and here is my problem. I recently added my first shared runner on Windows, still using the Docker executor as usual, as described here (which is the way I always do it). But with this new Windows shared runner, and only this one, my job fails when msbuild restores NuGet packages from our local (subnetwork) NuGet repository.
I tried installing another Docker runner on another Windows host and hit the same issue. I get this error:
error : Unable to load the service index for source http://skirnir/NuGet/index.json
error : No such host is known
Skirnir is a local server where my NuGet packages are hosted.
From within my job I tried curl http://login:pwd@skirnir/NuGet/index.json and it works, so code executed inside the container (mono image) can reach Skirnir! But the msbuild restore always fails to fetch the packages.
I tried replacing skirnir with its IP address, but the problem remains, because index.json references skirnir in "@id": "http://skirnir/NuGet/".
Here is my gitlab job:
cache:
  key: "$CI_JOB_NAME"
  paths:
    - packages

variables:
  MAIN_DIR: "Proj1"

before_script:
  - nuget help > /dev/null
  - cp $HOME/.config/NuGet/NuGet.Config .
  - nuget sources add -Name skirnir_ci -Source http://skirnir/NuGet/index.json -UserName $NUGET_USER -Password $NUGET_PASSWORD -configfile $(pwd)/NuGet.Config
  - export NUGET_PACKAGES=$(pwd)/packages

build pcl:
  image: mono
  stage: build
  tags:
    - docker
  retry: 2
  only:
    - tags
    - branches
    - web
  variables:
    PROJECT: "Proj1"
  script:
    - export MSBUILD_PROP="/p:Configuration=Release;RestoreConfigFile=$(pwd)/NuGet.Config"
    - msbuild /t:restore "$MSBUILD_PROP" $MAIN_DIR/$PROJECT/$PROJECT.csproj
    - msbuild "$MSBUILD_PROP" $MAIN_DIR/$PROJECT/$PROJECT.csproj
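Since curl inside the container can reach Skirnir while the NuGet restore cannot, it may be worth adding a purely diagnostic step to the job above to compare what the container's resolver and environment actually see. This is only a sketch, not a fix, and assumes nothing beyond the tools already present in the mono image; if the hostname turns out not to resolve on the Windows host, the Docker executor's config.toml also accepts extra_hosts and dns entries under [runners.docker].

before_script:
  # diagnostic only: check name resolution and proxy settings as seen by the job
  - getent hosts skirnir || echo "skirnir does not resolve inside the container"
  - env | grep -i proxy || echo "no proxy variables set in the job environment"
  # ... existing before_script lines follow unchanged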
Any idea?
I'm trying to set up a GitLab CI pipeline on a Docker runner.
I have a Docker runner with a Maven image (maven:3.8.6-openjdk-11) that I'm trying to use for a pipeline that compiles a Maven project.
I have set up a Maven repository cache, but it doesn't seem to be working properly. Every time my pipeline runs, it downloads the dependencies needed to compile my project from my Nexus repository... I expected it to download only the "new" dependencies and to take the others from the cache without downloading them again.
Below is the content of the .gitlab-ci.yml file:
variables:
  MAVEN_OPTS: >-
    -Dhttps.protocols=TLSv1.2
    -Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository
    -Dorg.slf4j.simpleLogger.showDateTime=true
    -Djava.awt.headless=true

cache:
  paths:
    - .m2/repository/

build:
  tags:
    - tdev
  script:
    - mvn --settings $MAVEN_SETTINGS clean compile
On each run it downloads the dependencies:
[...]
Downloading from infogreffe: https://xxxx/nexus/repository/infogreffe/org/wildfly/core/wildfly-core-security/19.0.0.Final/wildfly-core-security-19.0.0.Final.jar
Downloaded from infogreffe: https://xxxx/nexus/repository/infogreffe/org/wildfly/core/wildfly-controller-client/19.0.0.Final/wildfly-controller-client-19.0.0.Final.jar (214 kB at 70 kB/s)
Downloading from infogreffe: https://xxxx/nexus/repository/infogreffe/org/wildfly/security/wildfly-elytron-auth/2.0.0.Final/wildfly-elytron-auth-2.0.0.Final.jar
Downloaded from infogreffe: https://xxxx/nexus/repository/infogreffe/com/oracle/ojdbc5/11.2.0.2.0/ojdbc5-11.2.0.2.0.jar (2.0 MB at 666 kB/s)
[...]
Note that I use a local cache.
Thanks.
I expected that, with a cache for my Maven repository, it wouldn't try to download dependencies on every pipeline run unless there are new dependencies. But maybe I've misunderstood how caching works...
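For reference, a keyed variant of the cache block is a common refinement; the sketch below is only an illustration and assumes the runners that pick up the job actually share the same cache storage (with a purely local cache, each runner host keeps its own copy, which looks like a permanent cache miss whenever a different runner takes the job):

cache:
  key: maven-$CI_COMMIT_REF_SLUG   # assumed per-branch key; any stable key works
  paths:
    - .m2/repository/
  policy: pull-push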
I'm attempting to get AWS CodePipeline and CodeBuild working for a .NET Framework web application.
My buildspec looks like this...
version: 0.2
env:
  variables:
    PROJECT: TestCodeBuild1
    DOTNET_FRAMEWORK: 4.7.2
phases:
  build:
    commands:
      - dir
      - nuget restore $env:PROJECT.sln
      - msbuild $env:PROJECT.sln /p:TargetFrameworkVersion=v$env:DOTNET_FRAMEWORK /p:Configuration=Release /p:DeployIisAppPath="Default Web Site" /p:PackageAsSingleFile=false /p:OutDir=C:\codebuild\artifacts\ /t:Package
artifacts:
  files:
    - '**/*'
  base-directory: 'C:\codebuild\artifacts\_PublishedWebsites\${env:PROJECT}_Package\Archive\'
The "dir" line in the buildspec was put there just to confirm it is in the correct directory and the required folders are there, which they are.
It's using the following image for the build environment...
mcr.microsoft.com/dotnet/framework/sdk:4.7.2
When it runs, I get the following warning, when it gets to the nuget restore...
WARNING: Error reading msbuild project information, ensure that your input solution or project file is valid. NETCore and UAP projects will be skipped, only packages.config files will be restored.
The msbuild step subsequently fails too, but I'm guessing that may be related to the fact that the nuget restore hasn't worked correctly.
I've confirmed that the project and solution are correct. If I run "nuget restore" from my local environment it works fine, without any errors or warnings.
I thought perhaps it was something particular to the docker environment, so I tried creating a Dockerfile like so...
FROM mcr.microsoft.com/dotnet/framework/sdk:4.7.2
WORKDIR /app
COPY . .
RUN nuget restore
This also works fine. "docker build ." runs without any warnings.
So I'm not sure why this is failing in CodeBuild. As far as I can see everything is correct, and I haven't found a way to reproduce the issue locally.
Other forum posts just suggest correcting package issues in the project files, but as far as I can see there are no issues with the project packages.
Here is what I did and it seems to have worked.
First, the buildspec.yml:
version: 0.2
env:
  variables:
    SOLUTION: .\MyProject.sln
    PACKAGE_DIRECTORY: .\packages
    DOTNET_FRAMEWORK: 4.6.2
    PROJECT: MyProjectName
phases:
  build:
    commands:
      - .\nuget restore
      - >-
        msbuild $env:PROJECT.csproj
        /p:TargetFrameworkVersion=v$env:DOTNET_FRAMEWORK
        /p:Configuration=Release /p:DeployIisAppPath="MySite"
        /p:PackageAsSingleFile=false /p:OutDir=C:\codebuild\artifacts\
        /t:Package
artifacts:
  files:
    - '**/*'
  base-directory: 'C:\codebuild\artifacts\_PublishedWebsites\${env:PROJECT}_Package\Archive\'
I then downloaded nuget.exe (https://www.nuget.org/downloads) and put it into the root directory (the same directory as the .sln file)
When I pushed my commit to the branch on AWS, ".\nuget.exe" successfully restored all packages. I am using the Docker image "microsoft/aspnet:4.6.2".
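If committing the binary ever becomes undesirable, an alternative sketch is to fetch nuget.exe in an install phase instead; the URL below is nuget.org's "latest" download link, and pinning a specific version may be preferable:

phases:
  install:
    commands:
      # assumption: download nuget.exe at build time rather than committing it
      - Invoke-WebRequest https://dist.nuget.org/win-x86-commandline/latest/nuget.exe -OutFile nuget.exe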
Hope it helps someone.
After getting everything ready to deploy, I realized JHipster no longer generates a Dockerfile and the packaging is done with Jib. The generated .gitlab-ci.yml has a docker-push stage with a command like this:
./mvnw jib:build -Djib.to.image=$IMAGE_TAG -Djib.to.auth.username=gitlab-ci-token -Djib.to.auth.password=$CI_BUILD_TOKEN
but it fails with
[ERROR] Failed to execute goal com.google.cloud.tools:jib-maven-plugin:0.9.11:build (default-cli) on project test-project: Obtaining project build output files failed; make sure you have compiled your project before trying to build the image. (Did you accidentally run "mvn clean jib:build" instead of "mvn clean compile jib:build"?): /builds/amatos/test-project/target/classes -> [Help 1]
As that failed, I tried to run the command locally like this:
./mvnw jib:build -Djib.to.image=registry.gitlab.com/amatos/test-project:v6.0.1 -Djib.to.auth.username=amatos -Djib.to.auth.password=password
but instead of trying to connect to GitLab's registry, it tries to connect to registry.hub.docker.com:
[INFO] Retrieving registry credentials for registry.hub.docker.com...
What I would like to know is: how do I get it to connect to GitLab instead of Docker Hub?
In order to connect to a custom repository, I changed -Djib.to.image to -Dimage and it worked.
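Applied to the generated job, the adjusted command then looks like this (same variables as in the original snippet):

./mvnw jib:build -Dimage=$IMAGE_TAG -Djib.to.auth.username=gitlab-ci-token -Djib.to.auth.password=$CI_BUILD_TOKEN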
This is followed by jhipster/generator-jhipster issue 9761 which states:
as the docker-push is done in another stage, there is a missing - target/classes in the previous stage.
It is needed by jib. It should look like:
maven-package:
  stage: package
  script:
    - ./mvnw verify -Pprod -DskipTests -Dmaven.repo.local=$MAVEN_USER_HOME
  artifacts:
    paths:
      - target/*.jar
      - target/classes
    expire_in: 1 day
This is possibly addressed by PR 9762 (merged), commit 50cc009, which is only in master and not yet referenced by any tag.
I'm using a Bitbucket Docker pipeline to validate builds of my Android app on push. One of my dependencies is a private package that I host on another Bitbucket repository. For builds on developer machines, I use Gradle's private Maven repository plugin, which can resolve my dependency with an encrypted username and password.
This works well for developer machines, but I want to avoid hard-coding usernames and passwords in the pipeline. Instead, since Bitbucket supports SSH keys across repositories for authentication, my pipeline script clones the repository containing my private packages and copies them over to the Gradle cache. I have tried both:
/home/gradle/.gradle/caches/modules-2/files-2.1/com.mycompany
~/.gradle/caches/modules-2/files-2.1/com.mycompany
as cache locations. The clone and copy work just fine, as I can see the files in their respective directories in the cache by adding an ls to the pipeline, but gradlew still goes to the internet (the other Bitbucket repository) to resolve the dependencies, as if there were no cache. Also, I'm using the Gradle Docker image gradle:3.4.1 (the Gradle version declared in my project-level build.gradle file), but a plain gradle build fails with an error that google() is not a function.
Gradlew build fails trying to resolve my package's .pom file, complaining that the username is missing (because there is no gradle.properties in the pipeline). But why doesn't it use the cache instead of going to the repository?
I have tried the standard java:8 Docker image and the Gradle Docker images up to 5.1.1, and I have tried copying the package files into various Gradle caches in the Docker image. I have also tried altering permissions with chmod 775, to no avail. I have also tried gradlew assembleDebug, with the same results as gradlew build. I'm a bit new to Gradle, Docker and Bitbucket, so I'm not sure what is causing the issue.
image: gradle:3.4.1

pipelines:
  default:
    - step:
        caches:
          - gradle
          - android-sdk
        script:
          # Download and unzip android sdk
          - wget --quiet --output-document=android-sdk.zip https://dl.google.com/android/repository/sdk-tools-linux-3859397.zip
          - unzip -o -qq android-sdk.zip -d android-sdk
          # Define Android Home and add PATHs
          - export ANDROID_HOME="/opt/atlassian/pipelines/agent/build/android-sdk"
          - export PATH="$ANDROID_HOME/tools:$ANDROID_HOME/tools/bin:$ANDROID_HOME/platform-tools:$PATH"
          # Download packages.
          - yes | sdkmanager "platform-tools"
          - yes | sdkmanager "platforms;android-27"
          - yes | sdkmanager "build-tools;27.0.3"
          - yes | sdkmanager "extras;android;m2repository"
          - yes | sdkmanager "extras;google;m2repository"
          - yes | sdkmanager "extras;google;instantapps"
          - yes | sdkmanager --licenses
          # Build apk
          - git clone git@bitbucket.org:myorg/myrepo.git
          - scp -r myrepo/com/mycomp/* /home/gradle/.gradle/caches/modules-2/files-2.1/com.mycomp
          #- gradle build
          - ./gradlew build
          #- ./gradlew assembleDebug

definitions:
  caches:
    android-sdk: android-sdk
Gradlew build error:
> Could not resolve all files for configuration ':app:debugCompileClasspath'.
> Could not resolve com.mycomp:mypackage:1.0.1.
Required by:
project :app
> Could not resolve com.mycomp:mypackage:1.0.1.
> Could not get resource 'https://bitbucket.org/myorg/myrepo/raw/releases/com/mycomp/mypackage/1.0.11/package-1.0.1.pom'.
> Username may not be null
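Since files copied by hand into modules-2/files-2.1 are generally not picked up (that directory is Gradle's internal artifact cache, keyed by its own metadata, and is not consulted as a substitute for a declared repository), one hedged alternative is to point the build at the cloned checkout as a plain file-based Maven repository. The directory name and layout below are assumptions based on the clone and copy steps in the pipeline above:

// build.gradle (sketch): resolve the private package from the cloned checkout
repositories {
    maven {
        // assumes the pipeline clones the package repo into the project root as "myrepo"
        // and that it follows a standard Maven layout (com/mycomp/mypackage/<version>/...)
        url "${rootDir}/myrepo"
    }
    // keep the existing remote repositories after this entry for everything else
}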
I'm trying to set up a Jenkins job to automate builds of our Aurelia app. When I build the app locally it works 100%, but when I build it on Jenkins using the same process, the files in the script directory are different.
I'm running Debian testing on my laptop. Jenkins is a Docker image running on Rancher.
When I test the build with Jenkins, I get the following error:
http://localhost/src/main.js Failed to load resource: the server responded with a status of 404 (Not Found)
vendor-bundle.js:2 Error: Script error for "main"(…)r # vendor-bundle.js:2
Both my local environment and Jenkins have the following tools and versions:
node: 6.9.1
npm: 3.10.8
aurelia: 0.22.0
The app is written in TypeScript.
We are also using the aurelia-cli to build the app, with "au build --env prod".
The process we use to build the app:
npm install aurelia-cli
npm install
typings install
au build --env prod
Any help is appreciated.
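For completeness, a minimal declarative Jenkinsfile wrapping the same four steps might look like the sketch below; the agent selection and the local path to the au binary are assumptions, not part of the original setup:

// Jenkinsfile (sketch): run the same build steps listed above on a Jenkins agent
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'npm install aurelia-cli'
                sh 'npm install'
                sh 'typings install'
                // assumes aurelia-cli was installed locally, so au lives in node_modules/.bin
                sh './node_modules/.bin/au build --env prod'
            }
        }
    }
}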