Code coverage tab not showing in TFS 2015 build

I am building some code using TFS 2015, running some Karma tests, and producing a Cobertura summary file with the karma-coverage reporter, configured in karma.conf.js as:
coverageReporter: {
    dir: 'testResults/stubs',
    includeAllSources: true,
    reporters: [
        { type: 'html', subdir: 'CoverageReporter' },
        { type: 'cobertura', subdir: 'cobetura', file: 'cobertura.xml' },
        { type: 'text', subdir: '.', file: 'testResults.txt' },
        { type: 'text-summary', subdir: '.', file: 'testSummary.txt' }
    ]
}
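For reference, here is roughly where that block sits in a full karma.conf.js; karma-coverage only produces these reports when the coverage preprocessor and the coverage reporter are enabled. The frameworks and file patterns below are placeholders, not taken from the original setup:

module.exports = function (config) {
    config.set({
        frameworks: ['jasmine'],                      // placeholder framework
        files: ['src/**/*.js', 'test/**/*.spec.js'],  // placeholder patterns
        preprocessors: {
            'src/**/*.js': ['coverage']               // karma-coverage instruments these files
        },
        reporters: ['progress', 'coverage'],          // 'coverage' activates coverageReporter
        coverageReporter: {
            // ... the block shown above ...
        },
        singleRun: true
    });
};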
I then publish the coverage results with a Publish Code Coverage Results task in the build definition, but in the build summary there is no Code Coverage tab to display the results. The artifact is created with the data in it, though, so it can be downloaded and viewed correctly.
I have seen many posts showing the Code Coverage tab, but I can't seem to get it to show. Please help.
Output from the build:
2018-11-21T12:02:06.8135130Z Executing the powershell script: C:\agent\tasks\PublishCodeCoverageResults\1.0.3\PublishCodeCoverageResults.ps1
2018-11-21T12:02:07.0166408Z ##[debug]Entering PublishCodeCoverage.ps1
2018-11-21T12:02:07.0322620Z ##[debug]codeCoverageTool = Cobertura
2018-11-21T12:02:07.0322620Z ##[debug]summaryFileLocation = C:\agent\_work\55\s\testResults\stubs\cobetura\cobertura.xml
2018-11-21T12:02:07.0322620Z ##[debug]reportDirectory = C:\agent\_work\55\s\testResults\stubs\CoverageReporter
2018-11-21T12:02:07.0322620Z ##[debug]additionalCodeCoverageFiles =
2018-11-21T12:02:07.0478883Z Starting 'Publish-CodeCoverage' cmdlet...
2018-11-21T12:02:07.1416621Z Publishing coverage summary data to TFS server.
2018-11-21T12:02:07.2822777Z Publishing additional files to TFS server.
2018-11-21T12:02:07.4854044Z Max Concurrent Uploads 1, Max Creators 1
2018-11-21T12:02:07.5322833Z Found 38 files to upload.
2018-11-21T12:02:07.5322833Z Files found locally 38,
2018-11-21T12:02:07.5322833Z Files evaluated 0,
2018-11-21T12:02:07.5322833Z Files left to evaluate 38.,
2018-11-21T12:02:07.5322833Z Files created without upload 0,
2018-11-21T12:02:07.5479049Z Files uploaded 0
2018-11-21T12:02:07.5479049Z Files left to process 38
2018-11-21T12:02:07.5479049Z ---------------------------
2018-11-21T12:02:09.5949006Z Files found locally 38,
2018-11-21T12:02:09.5949006Z Files evaluated 38,
2018-11-21T12:02:09.5949006Z Files left to evaluate 0.,
2018-11-21T12:02:09.5949006Z Files created without upload 0,
2018-11-21T12:02:09.5949006Z Files uploaded 35
2018-11-21T12:02:09.5949006Z Files left to process 3
2018-11-21T12:02:09.5949006Z ---------------------------
2018-11-21T12:02:11.6109203Z Created 0 files without uploading content. Total files processed 38
2018-11-21T12:02:11.6418363Z Uploaded artifact 'C:\agent\_work\55\s\testResults\stubs\CoverageReporter' to container folder 'Code Coverage Report_13389' of build 13389.
2018-11-21T12:02:11.7824652Z Associated artifact 27182 with build 13389

Related

Using SWC with Bazel

I'm trying to compile my TS files to JS with SWC in Bazel. I'm unfortunately unable to use the rules_js library (which has rules_swc), so I need to hand-roll it. So far I have:
load("#npm//#swc/cli:index.bzl", "swc")
SRC_FILES = glob(["**/*.ts"])
swc(
name = "compile_ts",
outs = [s.replace(".ts", ".js") for s in SRC_FILES],
args = [
"$(location %s)" % s
for s in SRC_FILES
],
data = SRC_FILES,
)
I get
Successfully compiled 96 files with swc.
but no output and the following error
ERROR: output {file} was not created
for each src file I passed in. How do I get the compiled files to output to the bazel-out folder so they can be used as inputs to other rules?
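One hedged sketch, offered as a possibility rather than a confirmed fix: by default @swc/cli writes compiled files next to the inputs rather than into Bazel's output tree, so the declared outs are never created. Pointing the CLI at the rule's output directory with --out-dir and the $(RULEDIR) make variable is one way to line the two up, assuming the generated macro expands predefined variables and that swc mirrors the inputs' relative paths under the output dir:

load("@npm//@swc/cli:index.bzl", "swc")

SRC_FILES = glob(["**/*.ts"])

swc(
    name = "compile_ts",
    outs = [s.replace(".ts", ".js") for s in SRC_FILES],
    # --out-dir sends compiled files into bazel-out, where the
    # declared outs live; $(RULEDIR) resolves to that directory.
    args = [
        "--out-dir",
        "$(RULEDIR)",
    ] + [
        "$(location %s)" % s
        for s in SRC_FILES
    ],
    data = SRC_FILES,
)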

Bazel: How to extend an existing Docker image?

I know that in a Dockerfile I can extend an existing Docker image using:
FROM python/python
RUN pip install requests
But how do I extend it in Bazel?
I am not sure if I should use container_import, but with it I am getting the following error:
container_import(
    name = "postgres",
    base_image_registry = "some.artifactory.com",
    base_image_repository = "/existing-image:v1.5.0",
    layers = [
        "//docker/new_layer",
    ],
)
root@ba5cc0a3f0b7:/tcx# bazel build pkg:postgres-instance --verbose_failures --sandbox_debug
ERROR: /tcx/docker/postgres-operator/BUILD.bazel:12:17: in container_import rule //docker/postgres-operator:postgres:
Traceback (most recent call last):
File "/root/.cache/bazel/_bazel_root/2f47bbce04529f9da11bfed0fc51707c/external/io_bazel_rules_docker/container/import.bzl", line 98, column 35, in _container_import_impl
"config": ctx.files.config[0],
Error: index out of range (index is 0, but sequence has 0 elements)
ERROR: Analysis of target '//pkg:postgres-instance' failed; build aborted: Analysis of target '//docker/postgres-operator:postgres' failed
INFO: Elapsed time: 0.209s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (1 packages loaded, 2 targets configured)
container_import is the correct rule to import an existing image. However, all it does is import; it doesn't pull the image from anywhere. I think you're looking for container_pull instead, which pulls an image from a repository and then automatically uses container_import to translate it for the other rules_docker rules.
To add a new layer, use container_image, with base set to the imported image and tars set to the additional files you want to add. Or, if you want to add things in other formats, see the docs for alternatives to tars (like debs or files).
Putting it all together, something like this in your WORKSPACE:
load("@io_bazel_rules_docker//container:container.bzl", "container_pull")

container_pull(
    name = "postgres",
    registry = "some.artifactory.com",
    repository = "existing-image",
    tag = "v1.5.0",
)
and then this in a BUILD file:
load("@io_bazel_rules_docker//container:container.bzl", "container_image")

container_image(
    name = "postgres_plus",
    base = "@postgres//image",
    tars = ["//docker/new_layer"],
)
The specific problem you're running into is that container_import.layers isn't for adding new layers; it's for specifying the layers of the image you're importing. You could obtain those layers some other way (http_archive, checked-in tar files, etc.) and then specify them all by hand instead of using container_pull, if you're doing something unusual.
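For that unusual case, a minimal sketch of the by-hand import (all file names here are placeholders): container_import wants the image's config JSON plus its layer tars, and a missing config is exactly what the "index out of range" error on ctx.files.config above is complaining about.

load("@io_bazel_rules_docker//container:container.bzl", "container_import")

container_import(
    name = "postgres_base",
    # the image's config JSON, e.g. extracted from a `docker save` tarball
    config = "existing-image-config.json",
    # the image's own layers, in order, obtained out-of-band
    layers = [
        "layer0.tar",
        "layer1.tar",
    ],
)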

Google Dataflow error on setup with no information

I'm attempting to run a Beam pipeline that has a requirements.txt file. This fails, and the logging entry under worker-startup is:
{
  insertId: "3570218088494260896:493966:0:74068"
  jsonPayload: {
    line: "boot.go:134"
    message: "Failed to install packages: failed to install requirements: exit status 1"
  }
  labels: {
    compute.googleapis.com/resource_id: "3570218088494260896"
    compute.googleapis.com/resource_name: "jumps-cafc68e2-10261505-b789-harness-kn3b"
    compute.googleapis.com/resource_type: "instance"
    dataflow.googleapis.com/job_id: "2017-10-26_15_05_55-17840030900137737069"
    dataflow.googleapis.com/job_name: "jumps-cafc68e2"
    dataflow.googleapis.com/region: "global"
  }
  logName: "projects/sixty-capital/logs/dataflow.googleapis.com%2Fworker-startup"
  receiveTimestamp: "2017-10-26T22:12:41.061522503Z"
  resource: {
    labels: {
      job_id: "2017-10-26_15_05_55-17840030900137737069"
      job_name: "jumps-cafc68e2"
      project_id: "sixty-capital"
      region: "global"
    }
    type: "dataflow_step"
  }
  severity: "CRITICAL"
  timestamp: "2017-10-26T22:12:35Z"
}
Is there a way of learning more about what happened?
Looking at the logs from the same process immediately prior shows what the process was attempting to do.
In this case, the local process had created tarballs/zip files of the packages from requirements.txt and uploaded them to GCS. The remote process installs requirements.txt with links to those files; this generally allows the installation to avoid downloading anything further.
But if an item in the file is a source that the remote process can't access (either a local path or a private git repo) rather than a simple package name, the installation fails.
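As an illustration (the package names and paths are placeholders, not from the original job), the first two entries in a requirements.txt like this install cleanly on the workers, while the last two are the kind that fail:

requests==2.18.4                                 # pinned package name: staged to GCS and installed fine
six                                              # plain package name: also fine
./local-libs/my-helper                           # local path: doesn't exist on the worker
git+ssh://git@example.com/team/private-lib.git   # private repo: the worker can't authenticate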

Electron Build Windows Folder Structure

Given an application made in Electron, the folder structure would look something like:
App
- assets
  - models
    - exe files
- index.html
- main.js
When executing the build following the site's recommendation, by entering the following command:
electron-packager . --overwrite --asar=true --platform=win32 --arch=ia32 --icon=assets/icons/win/icon.ico --prune=true --out=release-builds --version-string.CompanyName=CE --version-string.FileDescription=CE --version-string.ProductName="Electron Tutorial App"
Electron v1.7.9 creates the build correctly; however, inside the release-builds/resources folder there is the app.asar file, so all the content that was inside my models folder becomes inaccessible. Inside that folder were .exe files that should be run on demand.
The system then looks for the files at the following path: project_path/resources/app.asar/assets/models/, that is, it treats app.asar as a folder, but after the build app.asar is a file.
Since there were .exe files inside the original folder, this is a problem: app.asar cannot absorb executables.
What would be the way to keep these .exe files? If I run the build without the --asar parameter, the program works correctly; however, all of my project folder / source code is exposed.
My question is: what is the best way to generate the build, keeping the code closed and making use of the .exe files?
The short answer to your question is to use the unpackDir option for the asar option inside of electron-packager. Here is a sample of what this might look like:
'use strict';
... ...
var packager = require('electron-packager');
var electronPackage = require('electron/package.json');
var pkg = require('./package.json');

// pull the electron version from the package.json file
var electronVersion = electronPackage.version;
... ...
var opts = {
    name: pkg.name,
    platform: 'win32',
    arch: 'ia32',                              // ia32, x64 or all
    dir: './',                                 // source location of app
    out: './edist/',                           // destination location for app os/native binaries
    ignore: config.electronignore,             // don't include these directories in the electron app build
    icon: config.icon,
    asar: {unpackDir: config.excludeFromASAR}, // compress project/modules into an asar blob excluding some things
    overwrite: true,
    prune: true,
    electronVersion: electronVersion,          // tell the packager what version of electron to build with
    appCopyright: pkg.copyright,               // copyright info
    appVersion: pkg.version,                   // the version of the application we are building
    win32metadata: {                           // Windows-only config data
        CompanyName: pkg.authors,
        ProductName: pkg.name,
        FileDescription: pkg.description,
        OriginalFilename: pkg.name + '.exe'
    }
};

// Build the electron app
gulp.task('build:electron', function (cb) {
    console.log('Launching task to package binaries for ' + opts.name + ' v' + opts['appVersion']);
    packager(opts, function (err, appPath) {
        console.log(' <- packagerDone() ' + err + ' ' + appPath);
        console.log(' all done!');
        cb();
    });
});
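With unpackDir set, the excluded files are shipped next to the archive in an app.asar.unpacked directory instead of inside it, mirroring their original layout. At runtime you can resolve and run them from there; a small sketch, where the models path and tool.exe name are placeholders:

// main process: run an .exe that was kept out of the asar archive
const path = require('path');
const { execFile } = require('child_process');

// files excluded via unpackDir live under resources/app.asar.unpacked
const exePath = path.join(
    process.resourcesPath,
    'app.asar.unpacked',
    'assets',
    'models',
    'tool.exe' // placeholder executable name
);

execFile(exePath, [], function (err, stdout) {
    if (err) {
        console.error('failed to run tool:', err);
        return;
    }
    console.log(stdout);
});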

How to deploy your Dart app (using web_ui) without using pub deploy

What is the best strategy to deploy a Dart web_ui app manually?
pub deploy doesn't work for me, and I have raised a bug report. So I am thinking about the best way to deploy manually.
This is how I started:
1) From the project root, I compile the web UI components (dwc.dart)
2) Change directory to web/out, then run dart2js (see the sketch below)
3) Copy all the .js files into the scripts/js public folder on the server
4) Copy appname.html to the server, changing the css and script paths to match step 3
5) Make sure dart.js is also in the same directory as in step 3
This is as far as I got. So what else do I need to do?
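In shell form, steps 2 and 3 amount to something like this (the entry file name and the server path are placeholders based on the description above):

# step 2: after dwc has written the compiled components to web/out,
# compile the entry point to JavaScript
cd web/out
dart2js -o appname.dart.js appname.dart
# step 3: copy the generated .js files to the server's public scripts folder
cp *.js /var/www/mysite/scripts/js/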
A few questions:
1) Do I manually change the file paths in the generated .js files to point to the public folders on the server for the files they reference, and make sure those files are on the server as well?
2) Do I need to copy all packages to the server too?
3) Any preferred file structure on the server?
Any tips on this really appreciated.
Thanks.
I wrote a Grunt script for it (since I had no time to look up how to properly write code for Grunt, I did not share it, as it's a mess), but basically I do this:
compiling a list of files with dwc to a given out dir
compiling it to JavaScript
cleaning up all non-deployable files
changing some paths inside the HTML to match the server paths (for some reason, these get changed by the compilation process)
removing all packages except the ones I really need (JS interop and browser)
Since I'm only using the JS version, I remove all Dart packages. Since the paths inside the HTML files are up to you, you can already use a structure that suits you/your server.
I can provide you with a Grunt script to understand the order of tasks. Practically, the order I use is this one:
Create the build directory. I usually use /build/web. I usually put these files (index.html, main.dart, /css and so on) into the /web dir, and the rest of the components into the /lib directory.
Compile the .dart file that contains the main() function ("main.dart" in my case, for simpler projects) to JavaScript and put it into the /build/web directory
Copy the other needed files and folders to the /build/web directory. Also, during this process you'll be copying the packages that your project needs. You'll see this in the example provided below.
Remove all empty folders from the project
You can create a Grunt task to open the /index.html file in the browser once the building process has ended (I will not provide this example)
The structure of the dart test project:
testApp
- gruntfile.js
- package.json
- /lib
- /packages
  - /angular
- /web
  - index.html
  - main.dart
  - /css
  - /img
So, the Grunt example script to cover steps 1 to 4 looks like this (copy it to gruntfile.js):
module.exports = function (grunt) {
    grunt.initConfig({
        // 1.
        // create build web directory
        mkdir: {
            build: {
                options: {
                    create: ['build/web']
                }
            }
        },

        // 2.
        // compile dart files
        dart2js: {
            options: {
                // use this to fix a problem in the dart2js node module. The module calls dart2js, not dart2js.bat.
                // this is needed for Windows. So use the path to your dart2js.bat file
                "dart2js_bin": "C:/dart/dart-sdk/bin/dart2js.bat"
            },
            compile: {
                files: {'build/web/main.dart.js': 'web/main.dart'}
            }
        },

        // 3.
        // copy all needed files, including all needed packages
        // except the .dart files.
        copy: {
            build: {
                files: [
                    {
                        expand: true,
                        src: [
                            'web/!(*.dart)',
                            'web/css/*.css',
                            'web/res/*.svg',
                            'web/packages/angular/**/!(*.dart)',
                            'web/packages/browser/**/!(*.dart)'
                        ],
                        dest: 'build'
                    }
                ]
            }
        },

        // 4.
        // remove empty directories copied using the previous task
        cleanempty: {
            build: {
                options: {
                    files: false
                },
                src: ['build/web/packages/**/*']
            }
        },
    });

    require('matchdep').filterDev('grunt-*').forEach(grunt.loadNpmTasks);

    grunt.registerTask('default', [
        'mkdir:build',
        'dart2js',
        'copy:build',
        'cleanempty:build'
    ]);
};
So this is the Grunt script example.
Create a /gruntfile.js file in your project's root directory and copy/paste the script into it.
Create a /package.json file in your project's root directory and copy/paste the following script:
{
    "name": "testApp",
    "version": "0.0.1",
    "description": "SomeDescriptionForTheTestApp",
    "main": "",
    "scripts": {
        "test": "echo \"Error: no test specified\" && exit 1"
    },
    "author": "YourName",
    "peerDependencies": {
        "grunt-cli": "^0.1.13"
    },
    "devDependencies": {
        "grunt": "^0.4.5",
        "grunt-cleanempty": "^1.0.3",
        "grunt-contrib-copy": "^0.7.0",
        "grunt-dart2js": "0.0.5",
        "grunt-mkdir": "^0.1.2",
        "matchdep": "^0.3.0"
    }
}
Open Command Prompt on Windows or Terminal on Linux, navigate to your project's root directory, and use this command:
npm install
Wait until all the needed Grunt modules have been downloaded to your local project. Once this is finished, issue this command in Command Prompt or Terminal:
node -e "require('grunt').cli()"
You can use this to initiate Grunt's default task without having Grunt installed globally on your system.
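If that command is a chore to retype, one option (my addition, not part of the original setup) is to wrap it in an npm script inside the package.json shown above, and then run it with npm run build:

"scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "build": "node -e \"require('grunt').cli()\""
}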
Now, to know the exact build structure for your project (including the packages that the project needs), make a build using pub build. Then you will be able to instruct Grunt to create the same directory structure.
You can add other tasks (like minification) if you want.
Hope this helps you all understand the process and gets you started with a test app first. Add your comments to make this even better and simpler.
