I am trying to deploy an AngularJS app to Divshot hosting through Travis CI.
This app contains a /dist directory which is:
where the result of the Grunt build goes (as it does locally)
.gitignored, therefore not pushed (Travis has to rebuild it)
set as the Divshot app's root directory
Travis installs deps and runs the build nicely, thanks to this .travis.yml file:
language: node_js
node_js:
  - '0.10'
install:
  - "npm install"
  - "gem install compass"
  - "bower install"
script:
  - "grunt build"
deploy:
  provider: divshot
  environment:
    master: development
  api_key:
    secure: ...
  skin_cleanup: true
But when it comes to the deployment, Travis says:
Error: Your app's root directory does not exists.
It's actually a message from divshot-cli because the /dist dir does not exist. I get the exact same message when I do a divshot push locally after removing the /dist dir.
Here is a build which cannot deploy: https://travis-ci.org/damrem/anm-client/builds/35582994
How come the /dist dir does not exist on the Travis VM after the install and build steps run OK?
Notes:
I tried to replace the script step with a before_deploy step, but the problem remains (build #13).
If I push a pre-existing /dist folder with an index.html in it, it deploys nicely (https://travis-ci.org/damrem/anm-client/builds/35583904)
It looks like this might be caused by a typo: at the bottom you have skin_cleanup: true when it should be skip_cleanup: true. If skip_cleanup isn't turned on, Travis will "reset" the working copy to the exact state of the git checkout before running the deploy, which wipes out the freshly built /dist directory.
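With that key renamed, the deploy section of the .travis.yml above would read as follows (everything else, including the encrypted api_key, stays as you already have it):

deploy:
  provider: divshot
  environment:
    master: development
  api_key:
    secure: ...
  skip_cleanup: true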
I was wondering if someone could help me with my CI/CD configuration for a multi-project JS setup.
Project js-core has some core JS libraries, and project vue-core has some reusable Vue components. Both are published as npm packages to the js-core project's package registry (so that we can use a deploy token with npm publish, as opposed to using a group deploy token, where you have to script the package creation and push directly to the API).
vue-core has a dependency on js-core, so it needs to access the npm package registry of the js-core project to download it inside the Docker CI/CD instance.
However, GitLab does not seem to let me override the package registry's URL. According to some Google research, setting the values with yarn config set @myorg:registry <gitlab-url> and yarn config set //gitlab.com/api/v4/... <deploy token> (or the npm config set equivalents) should work. However, I can see that it is still trying to download the packages from the vue-core package registry, even though the Packages feature is disabled there.
This is the part of my .gitlab-ci.yml that runs before it fails:
image: node:17-alpine
stages:
  - build
  - test
before_script:
  - yarn config set @myorg:registry https://gitlab.com/api/v4/projects/${NPM_PACKAGES_PROJECT_ID}/packages/npm/
  - yarn config set //gitlab.com/api/v4/projects/${NPM_PACKAGES_PROJECT_ID}/packages/npm/:_authToken ${NPM_PACKAGES_PROJECT_TOKEN}
  - yarn install
It fails at yarn install with:
[2/4] Fetching packages...
error An unexpected error occurred: "https://gitlab.com/api/v4/projects/<vue-core_project_id>/packages/npm/@myorg/js-core/-/@myorg/js-core-1.0.0.tgz: Request failed \"404 Not Found\"".
Where the project ID should be the value of <js-core_project_id>.
I have tried writing to all possible .npmrc file paths (~/.npmrc, ./.npmrc, ${CI_BUILD_DIR}/.npmrc), setting the values with npm config set and yarn config set, and deactivating the Packages feature in the GitLab project itself. I have also not found any predefined environment variables that would override my configs (see https://docs.gitlab.com/ee/ci/variables/predefined_variables.html). @myorg is correct and matches the GitLab URL, as the actual value for "myorg" is a single word...
I am pretty much out of ideas at this point, any help is appreciated!
Update 1: I don't think it is an npm/yarn issue; maybe a GitLab caching problem? If I execute npm config ls -l and yarn config list, the correct URLs are output in a format that works locally. I will attempt to clear the yarn cache (globally and locally) and pray that that works.
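For reference, the scoped-registry settings those yarn config calls are meant to produce correspond to a project-level .npmrc roughly like the following (same placeholder variables as in the CI file; this is a sketch for comparison, not a confirmed fix):

@myorg:registry=https://gitlab.com/api/v4/projects/${NPM_PACKAGES_PROJECT_ID}/packages/npm/
//gitlab.com/api/v4/projects/${NPM_PACKAGES_PROJECT_ID}/packages/npm/:_authToken=${NPM_PACKAGES_PROJECT_TOKEN}

Diffing that against the output of npm config ls -l / yarn config list inside the job might show which configuration yarn actually ends up using.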
I'm attempting to get AWS CodePipeline and CodeBuild working for a .NET Framework web application.
My buildspec looks like this...
version: 0.2
env:
  variables:
    PROJECT: TestCodeBuild1
    DOTNET_FRAMEWORK: 4.7.2
phases:
  build:
    commands:
      - dir
      - nuget restore $env:PROJECT.sln
      - msbuild $env:PROJECT.sln /p:TargetFrameworkVersion=v$env:DOTNET_FRAMEWORK /p:Configuration=Release /p:DeployIisAppPath="Default Web Site" /p:PackageAsSingleFile=false /p:OutDir=C:\codebuild\artifacts\ /t:Package
artifacts:
  files:
    - '**/*'
  base-directory: 'C:\codebuild\artifacts\_PublishedWebsites\${env:PROJECT}_Package\Archive\'
The "dir" line in the buildspec was put there just to confirm it's is in the correct directory, and the required folders are there, which they are.
It's using the following image for the build environment...
mcr.microsoft.com/dotnet/framework/sdk:4.7.2
When it runs, I get the following warning at the nuget restore step...
WARNING: Error reading msbuild project information, ensure that your input solution or project file is valid. NETCore and UAP projects will be skipped, only packages.config files will be restored.
The msbuild step subsequently fails too, but I'm guessing that's related to the nuget restore not having worked correctly.
I've confirmed that the project and solution are correct. If I run "nuget restore" in my local environment it works fine, without any errors or warnings.
I thought perhaps it was something particular to the docker environment, so I tried creating a Dockerfile like so...
FROM mcr.microsoft.com/dotnet/framework/sdk:4.7.2
WORKDIR /app
COPY . .
RUN nuget restore
This also works fine. "docker build ." runs without any warnings.
So I'm not sure why this is failing in CodeBuild. As far as I can see everything is correct, and I haven't found any way to reproduce the issue locally.
Other forum posts just suggest correcting package issues in the project files. However, there are no issues with the project packages as far as I can see.
Here is what I did and it seems to have worked:
1. buildspec.yml:
version: 0.2
env:
  variables:
    SOLUTION: .\MyProject.sln
    PACKAGE_DIRECTORY: .\packages
    DOTNET_FRAMEWORK: 4.6.2
    PROJECT: MyProjectName
phases:
  build:
    commands:
      - .\nuget restore
      - >-
        msbuild $env:PROJECT.csproj
        /p:TargetFrameworkVersion=v$env:DOTNET_FRAMEWORK
        /p:Configuration=Release /p:DeployIisAppPath="MySite"
        /p:PackageAsSingleFile=false /p:OutDir=C:\codebuild\artifacts\
        /t:Package
artifacts:
  files:
    - '**/*'
  base-directory: 'C:\codebuild\artifacts\_PublishedWebsites\${env:PROJECT}_Package\Archive\'
I then downloaded nuget.exe (https://www.nuget.org/downloads) and put it in the root directory (the same directory as the .sln file).
When I pushed my commit to the branch on AWS, ".\nuget.exe" successfully restored all packages. I am using the Docker image "microsoft/aspnet:4.6.2".
Hope it helps someone.
So, I need to integrate kcov into my GitLab CI to see code coverage on a test executable. The kcov documentation states that I need to run "kcov /path/to/outdir ./myexec" to generate a report as an HTML file. However, even though the command succeeds, /path/to/outdir is still empty and I don't know why, since the tests pass and kcov returns no errors.
Here is the .gitlab-ci.yml:
stage: coverage
dependencies:
  - Build
script:
  - mkdir build/test/kcov
  - cd build/test
  - kcov --include-path=../../src /kcov ./abuse-test
  - cd kcov
  - ls
artifacts:
  paths:
    - TP3/build
    - TP3/src
My test executable is abuse-test; it is generated via CMake/make and sits at TP3/build/test/abuse-test.
The console output in CI is the following:
on igl601-runner3 5d2b3c01
Using Docker executor with image depot.dinf.usherbrooke.ca:4567/e19-igl601/eq09/image_tp3 ...
Pulling docker image depot.dinf.usherbrooke.ca:4567/e19-igl601/eq09/image_tp3 ...
Using docker image sha256:c2cf0a7c10687670c7b28ee23ac06899de88ebb0d86e142bfbf65171147fc167 for depot.dinf.usherbrooke.ca:4567/e19-igl601/eq09/image_tp3 ...
Running on runner-5d2b3c01-project-223-concurrent-0 via dinf-prj-16...
Fetching changes...
Removing TP3/build/
HEAD is now at b2e1277 Update .gitlab-ci.yml
From https://depot.dinf.usherbrooke.ca/e19-igl601/eq09
b2e1277..7cf0af5 master -> origin/master
Checking out 7cf0af56 as master...
Skipping Git submodules setup
Downloading artifacts for Build (8552)...
Downloading artifacts from coordinator... ok id=8552 responseStatus=200 OK token=Pagxjp_C
$ cd TP3
$ mkdir build/test/kcov
$ cd build/test
$ kcov --include-path=../../src /kcov ./abuse-test
===============================================================================
All tests passed (3 assertions in 3 test cases)
$ cd kcov
$ ls
Uploading artifacts...
TP3/build: found 2839 matching files
TP3/src: found 211 matching files
Uploading artifacts to coordinator... ok id=8554 responseStatus=201 Created token=PxDHHjxf
Job succeeded
The kcov documentation states: "/path/to/outdir will contain lcov-style HTML output generated continuously while the application run"
And yet, when I browse the artifacts, I find nothing.
Hi, it looks like you're specifying /kcov as the outdir:
kcov --include-path=../../src /kcov ./abuse-test
Since you're working on a *nix-based system, the leading / implies an absolute path from the root of your filesystem.
The cd kcov step assumes a relative path (down from your current directory) since it is missing the /.
So I guess changing your kcov command to:
kcov --include-path=../../src kcov ./abuse-test
would fix your issue.
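Applied to the job above, the script section would then look roughly like this (only the outdir argument changes):

script:
  - mkdir build/test/kcov
  - cd build/test
  - kcov --include-path=../../src kcov ./abuse-test
  - cd kcov
  - ls

The report should then land under TP3/build/test/kcov and be picked up by your existing TP3/build artifacts path.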
I have a single BitBucket repository containing the code for an Angular app in a folder called ui and a Node API in a folder called api.
My BitBucket pipeline runs ng test for the Angular app, but the node_modules folder isn't being cached correctly.
This is my BitBucket Pipeline yml file:
image: trion/ng-cli-karma
pipelines:
  default:
    - step:
        caches:
          - angular-node
        script:
          - cd ui
          - npm install
          - ng test --watch=false
definitions:
  caches:
    angular-node: /ui/node_modules
When the build runs it shows:
Cache "angular-node": Downloading
Cache "angular-node": Extracting
Cache "angular-node": Extracted
But when it performs the npm install step it says:
added 1623 packages in 41.944s
I am trying to speed the build up, and I can't work out why npm needs to install the dependencies when they should already be in the cache that was just restored.
My guess is that your cache position is not correct. There is a pre-configured node cache (named "node") that can simply be activated, with no need for a custom cache. (Here the default cache fails because your node build is in a subfolder of the clone directory, so you do need a custom cache.)
Cache positions are relative to the clone directory. Bitbucket clones into /opt/atlassian/pipelines/agent/build, which is probably why your absolute cache path did not work.
Simply making the cache reference relative should do the trick:
pipelines:
  default:
    - step:
        caches:
          - angular-node
        script:
          - cd ui
          - npm install
          - ng test --watch=false
definitions:
  caches:
    angular-node: ui/node_modules
That may fix your issue.
I am running a CI pipeline to build firmware for the ESP8266 using PlatformIO and Bitbucket Pipelines. My code builds successfully, and now I want to cache the directory that contains the PlatformIO libraries (.piolibdeps). Here are the contents of my platformio.ini file:
[env:nodemcuv2]
platform = espressif8266
board = nodemcuv2
framework = arduino
upload_port = 192.168.1.108
lib_deps =
  ESPAsyncTCP@1.1.0
  OneWire
  Time
  FauxmoESP
  Blynk
  DallasTemperature
  ArduinoJson
  Adafruit NeoPixel
How do I cache this directory in Bitbucket Pipelines? Please see below the contents of my bitbucket-pipelines.yml file; with this it is not caching the defined directory. What's wrong here?
image: eclipse/platformio
pipelines:
  branches:
    develop:
      - step:
          name: Build Project
          caches: # caches the dependencies
            - directories
          script: # Modify the commands below to build your repository.
            - pio ci --project-conf=./Code/UrbanAquarium.Firmware/platformio.ini ./Code/UrbanAquarium.Firmware/src
            - pwd
definitions:
  caches:
    directories: ./Code/UrbanAquarium.Firmware/.piolibdeps
And here is my folder structure.
In case you're still looking for an answer: I think you got it almost right, but you probably need to specify a custom --build-dir (so that you can point your cache at the same path) as well as --keep-build-dir (see https://docs.platformio.org/en/latest/userguide/cmd_ci.html). Also, I'm not sure why you specified the ./Code/UrbanAquarium.Firmware/ prefix.
That said, I've tried the above and it quickly became ugly. For now I'll only cache ~/.platformio, as well as the default pip cache:
image: python:2.7.16
pipelines:
  default:
    - step:
        caches:
          - pip
          - pio
        script:
          - pip install -U platformio
          - platformio update
          - platformio ci src/ --project-conf=platformio.ini
definitions:
  caches:
    pio: ~/.platformio
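If you do later want to cache the build directory itself, a variant using the --build-dir / --keep-build-dir options mentioned above might look roughly like this (the .pio-ci path and the piobuild cache name are only illustrative choices, not taken from the original post):

image: python:2.7.16
pipelines:
  default:
    - step:
        caches:
          - pip
          - pio
          - piobuild
        script:
          - pip install -U platformio
          - platformio update
          # --keep-build-dir reuses the named build dir between runs instead of a throwaway temp dir
          - platformio ci src/ --project-conf=platformio.ini --build-dir=.pio-ci --keep-build-dir
definitions:
  caches:
    pio: ~/.platformio
    piobuild: .pio-ci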