I am using Bazel on a codebase that uses Spring Boot and JUnit in an airgapped environment. Here are the commands I need to run in order to fetch all the third-party dependencies:
bazel fetch '@bazel_tools//tools/build_defs/repo:*'
bazel fetch '@rules_java//java:*'
bazel fetch '@rules_cc//cc:*'
bazel fetch @local_config_platform//...
bazel fetch @local_config_sh//...
bazel fetch @maven//...
bazel fetch @org_opentest4j_opentest4j//jar:jar
bazel fetch @org_junit_jupiter_junit_jupiter_params//jar:jar
bazel fetch @org_junit_jupiter_junit_jupiter_engine//jar:jar
bazel fetch @org_junit_platform_junit_platform_console//jar:jar
bazel fetch @org_junit_platform_junit_platform_engine//jar:jar
bazel fetch @org_junit_platform_junit_platform_commons//jar:jar
bazel fetch @org_junit_platform_junit_platform_suite_api//jar:jar
bazel fetch @org_junit_platform_junit_platform_launcher//jar:jar
bazel fetch @org_apiguardian_apiguardian_api//jar:jar
bazel fetch @remote_coverage_tools//:coverage_report_generator
bazel fetch @remotejdk11_linux//:jdk
How do I make these dependencies available in an airgapped environment (where I cannot download things from the internet)? Does Bazel have a command that creates a download cache for third-party dependencies similar to https://docs.bazel.build/versions/master/guide.html#running-bazel-in-an-airgapped-environment? Said another way, can Bazel populate a directory that can be shared with other machines and used via --distdir there? Are there other ways to share Bazel's download cache?
bazel sync may help here. With this command, you can create a resolved file (e.g. resolved.bzl):
bazel sync --experimental_repository_resolved_file=resolved.bzl
The resolved file itself is Starlark, but its content looks a lot like JSON:
"repositories": [
{
"rule_class": "#bazel_tools//tools/build_defs/repo:http.bzl%http_archive",
"attributes": {
"url": "",
"urls": [
"https://github.com/embree/embree/archive/v2.16.5.zip"
],
"sha256": "9c4c0c385a7725946648578c1b58e887caa8b4a675a9356e76b9501d9e2e9e42",
"netrc": "",
"auth_patterns": {},
"canonical_id": "",
"strip_prefix": "embree-2.16.5",
"type": "",
"patches": [],
"patch_tool": "",
"patch_args": [
"-p0"
],
"patch_cmds": [],
"patch_cmds_win": [],
"build_file": "//ThirdParty:embree.BUILD",
"build_file_content": "",
"workspace_file_content": "",
"name": "embree"
},
"output_tree_hash": "b96d3a8db2ddd4bbc7ccde297a1d12e8be48c768bddde1ade53a4987861a1fe7"
}
]
You now need a little helper tool (a Python script, say) that visits each url/urls entry in the resolved repositories and downloads it into your --distdir folder, as sketched below.
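Here is a minimal sketch of such a helper. It assumes the resolved file assigns its repository list to a variable named resolved (the format written by --experimental_repository_resolved_file) and that, being plain Starlark literals, the file also parses as Python; the script name populate_distdir.py and the my-distdir folder are made up for the example:

#!/usr/bin/env python3
# populate_distdir.py -- minimal sketch: download every url/urls entry
# from a Bazel resolved file into a folder usable with --distdir.
import os
import urllib.request

def load_resolved(path):
    scope = {}
    with open(path) as f:
        exec(f.read(), scope)  # the resolved .bzl file is also valid Python syntax
    return scope["resolved"]

def populate(resolved_path, distdir):
    os.makedirs(distdir, exist_ok=True)
    for entry in load_resolved(resolved_path):
        for repo in entry.get("repositories", []):
            attrs = repo.get("attributes", {})
            urls = list(attrs.get("urls") or [])
            if attrs.get("url"):
                urls.insert(0, attrs["url"])
            for url in urls:
                # --distdir looks artifacts up by base name, so keep it unchanged
                dest = os.path.join(distdir, os.path.basename(url))
                if os.path.exists(dest):
                    break
                try:
                    print("fetching", url)
                    urllib.request.urlretrieve(url, dest)
                    break  # one mirror is enough
                except OSError as err:
                    print("failed:", url, err)

if __name__ == "__main__":
    populate("resolved.bzl", "my-distdir")

Run it on a machine with internet access, copy my-distdir into the airgapped environment, and build there with something like bazel fetch --distdir=/path/to/my-distdir //.... Keep in mind that --distdir matches artifacts by file name and declared checksum, so repositories without a sha256 in the resolved file may still try to reach the network.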
I am new to Terraform so please be kind.
During the build process, Terraform pushes the Docker image to AWS ECR with a new name on every build.
As the image name is different, we need to create a new task definition for each new build.
Is there a way to handle this issue in Terraform?
Any help is appreciated.
If you are OK with replacing the task definition each time with the new image, then you can just update the image name used in the task definition and Terraform will handle the update for you.
If you need to generate a new task definition each time and leave the old ones in place, then read on.
If you do not need to keep the task definition in the Terraform state, then you could remove it after deployment so that next time a new one will be created.
The state rm command removes a resource from the state:
terraform state rm aws_ecs_task_definition.service
If you do need to keep each task definition in the Terraform state, you could drive a for_each over a set of files so that Terraform generates one resource per file.
As an example, you could save the container definitions of each task definition to a JSON file within a folder. Each file looks something like this:
[
    {
        "name": "a-service",
        "image": "my-image",
        "cpu": 10,
        "memory": 512,
        "essential": true,
        "portMappings": [
            {
                "containerPort": 80,
                "hostPort": 80
            }
        ]
    }
]
Use the fileset function to list files in the folder and generate a new resource for each file using for_each:
resource "aws_ecs_task_definition" "service" {
family = "service"
for_each = fileset(path.module, "task-definitions/*.json")
container_definitions = file("${path.module}/${each.key}")
}
Running the release script without a publish option tries to publish the build to GitHub (and fails, complaining about not being able to find a GH token!):
Error: GitHub Personal Access Token is not set, neither programmatically, nor using env "GH_TOKEN"
Setting "publish": "never" will fail also complaining about not being able to find module electron-publisher-never!
Error: Cannot find module 'electron-publisher-never'
In both cases the project is built, but the build script exits non-zero!
I'm using the latest version of electron-builder.
My build configuration:
"build": {
"appId": "eu.armand.[****]",
"copyright": "Copyright © 2017 mim_Armand",
"productName": "[****]",
"mac": {
"publish": "never",
"category": "public.app-category.[****]",
"icon": "assets/icons/mac/icon.icns"
}
Any idea what's going on or if I'm doing it wrong?!
Try building with
"build": "electron-builder --publish never"
to never publish.
Rename your script to something else: if the script name is release, publish is set to always.
The documentation states this:
CLI --publish option values:
...
If npm script named release, — always.
Add to scripts in the development package.json:
"release": "build"
and if you run yarn release, a release will be drafted (if doesn’t
already exist) and artifacts published.
I solved it this way, because I didn't need to publish it to any repository:
"build":{
"appId": "XXX",
"productName": "XXX",
"directories":{
"output": "build"
},
"win":{
"target": "nsis",
"publish" : []
}
}
https://www.electron.build/configuration/publish
I have a build job in Jenkins which builds the project from GitHub for any branch. A package is created in the build job workspace with the version in its name, e.g. xxxx-yyyyy-2.15.0-SNAPSHOT.zip.
My next Artifactory push job has a file spec written as below:
{
    "files": [
        {
            "pattern": "/var/lib/jenkins/workspace/Jobname/target/*/xxxx-yyyyy*.zip",
            "target": "libs-snapshot-local/xxxx-yyyyy/",
            "recursive": "false"
        }
    ]
}
The above file spec matches the pattern and uploads the zip to libs-snapshot-local/xxxx-yyyyy/. But I need the file uploaded into a folder named after the version in the zip file name, xxxx-yyyyy-2.15.0-SNAPSHOT.zip.
Can anybody help me create a folder dynamically with the version name? Any idea how to specify the target path in the file spec?
File specs can use placeholders to create more flexible paths.
For example, in your case:
{
    "files": [
        {
            "pattern": "/var/lib/jenkins/workspace/Jobname/target/*/(xxxx-yyyyy*).zip",
            "target": "libs-snapshot-local/{1}/",
            "recursive": "false"
        }
    ]
}
Note the parentheses in the pattern and the placeholder {1} used in the target: the parenthesized part of the matched file name (everything up to .zip) is substituted into the target path.
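If it helps to see the substitution spelled out, here is a tiny Python illustration of what the (...) group and {1} do; the matching itself is performed by the JFrog file-spec engine, not by this regex:

import re

# The file name from the question; the group mirrors (xxxx-yyyyy*) in the spec.
name = "xxxx-yyyyy-2.15.0-SNAPSHOT.zip"
m = re.match(r"(xxxx-yyyyy.*)\.zip$", name)
target = "libs-snapshot-local/%s/" % m.group(1)  # {1} in the file spec
print(target)  # libs-snapshot-local/xxxx-yyyyy-2.15.0-SNAPSHOT/

So the zip ends up in a folder named after its full version string.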
I have an Electron app where I want to introduce parallel release channels: stable, next (for early adopters) and dev (for testing the latest build).
These will have a branch each, with new features appearing first in dev, progressing to next for beta testing and finally moving into stable.
I'm using electron-builder to make these release packages, and I want each to have its own auto-updates - so when I publish a new next release all the users with it get the update.
I want the applications to be independent - a user can have two channels installed and run both at the same time. They'll have different names and different icons.
I can manually set these up in the branches, but really I want to automate this as much as possible - a publish from the next branch should use the right name, icons, IDs and updater without risk of it going to the wrong channel.
Is there a way to do this with electron or electron-builder?
It's possible with electron-builder. I would have several build configurations and tell electron-builder which to use when building.
For example, create a file config/beta.json with the following setup:
{
    "appId": "com.company.beta",
    "productName": "App Beta",
    "directories": {
        "buildResources": "build/beta" // directory containing your build-specific assets (e.g., beta icons - icon.icns, icon.ico & background.png)
    },
    "mac": {
        "category": "public.app-category.finance"
    },
    "win": {
        "target": [
            "nsis"
        ]
    },
    "nsis": {
        "perMachine": false
    },
    "publish": [
        {
            "provider": "s3",
            "bucket": "com-app-beta" // dedicated S3 bucket for each build
        }
    ]
}
And duplicate config/beta.json for next.json and current.json (make sure to edit settings accordingly).
In package.json, add the following build scripts (note --em.name=app-beta to overwrite package.json's "name" value):
{
    "scripts": {
        "build": "build -owl --x64 --config ./config/current.json -p always --em.name=app",
        "build-beta": "build -owl --x64 --config ./config/beta.json -p always --em.name=app-beta",
        "build-next": "build -owl --x64 --config ./config/next.json -p always --em.name=app-next"
    }
}
Run the appropriate build script when ready to deploy:
npm run build-beta
Using electron-builder version 20.15.1 on macOS, @Jon Saw's solution needs a minor change, because the em option is not valid:
"build-beta": "build -owl --x64 --config ./config/beta.json -p always -c.extraMetadata.name=app-beta"
After digging into VSCode's documentation and playing with the tasks, I'm not sure I understand how tasks are defined.
I'd like to call my Grunt tasks from VSCode. The problem is that I'm using a plugin called "load-grunt-config", which makes loading of Grunt plugins automatic (no need to type grunt.loadNpmTasks() for each plugin you use) and makes it easy to have each plugin's configuration defined in its own file.
So my gruntfile is almost empty: tasks are defined in the file grunt/aliases.js, and the configuration for each plugin is separated into its own file (e.g. grunt/connect.js contains the configuration for the connect plugin).
When typing "run tasks", it shows every task available (including builtin ones, like svgmin, imagemin,...) so it's a total of 30 or 40.
No only it's slow but it also shows tasks I haven't defined and do not directly use.
(type grunt --help to see what it shows).
Is there a way to only show aliases I have defined ?
What's the point of defining the task into the tasks.json file ?
Is there a way to dirrectly run a task without having to list all tasks ? (there are shortcuts for "build" & "test" tasks so you can type "run test" and it will run the test task but is there a way to define new tasks that will appear there too ?)
VSCODE V0.7.0 TASKS SHORTCUTS
Gulp, Grunt and Jake are autodetected only if the corresponding files are present in the root of the opened folder. With the Gulp tasks below, the sub-tasks optimize (for serve-build) and inject (for serve-dev) run additional sub-tasks, and so forth. This process makes two top-level tasks available for the build and dev workflows.
gulpfile.js
// automate build node server start and restart on changes
gulp.task('serve-build', ['optimize'], function () {
    serve(false /* is Build */);
});

// automate dev node server start and restart on changes
gulp.task('serve-dev', ['inject'], function () {
    serve(true /* is Dev */);
});
The above Gulp tasks will be autodetected by VSCode as will many others.
However, you can manually define two keyboard shortcut tasks for TEST and BUILD in your tasks.json file. Subsequently, you can run TWO top-level tasks with CTRL-SHFT-T and CTRL-SHFT-B respectively.
tasks.json
{
    "version": "0.1.0",
    "command": "gulp",
    "isShellCommand": true,
    "args": [
        "--no-color"
    ],
    "tasks": [
        {
            "taskName": "serve-dev",
            "isBuildCommand": false,
            "isTestCommand": true,
            "showOutput": "always",
            "args": []
        },
        {
            "taskName": "serve-build",
            "isBuildCommand": true,
            "isTestCommand": false,
            "showOutput": "always",
            "args": []
        }
    ]
}
Notice the two properties isTestCommand and isBuildCommand. These directly map to CTRL-SHFT-T and CTRL-SHFT-B, so you can map any task to these keyboard shortcuts depending on your needs. If you have more than one task with these properties set to true, VSCode will use the first such task in the tasks.json file.
With tasks structured and nested properly you should be able to build your workflows with two keyboard shortcuts and avoid having VSCode enumerate your tasks each time via the CTRL+SHFT+P menus.
suppressTaskName
There's a global property suppressTaskName that can be used in the tasks.json. This property controls whether the task name is added as an argument to the command.
I'm not sure if this can help, but it's good to know about. It looks like this is a way to pass built-in arguments to the task runner. More in the Task Appendix. I haven't had a chance to test this to determine how it differs from the traditional "args": [] property. I know that prior to v0.7.0 there were problems with args: when it was used, the arguments and task names had to be reversed.
The showOutput property can also be useful, with or without the problemMatcher property, for displaying output in the VSCode Output window.
{
    "version": "0.1.0",
    "command": "gulp",
    "showOutput": "silent",
    "isShellCommand": true,
    "args": [
        "--no-color"
    ],
    "tasks": [
        {
            "taskName": "gulp-argument",
            "suppressTaskName": true,
            ...
        },
        {
            "taskName": "test",
            "args": [],
            "isBuildCommand": true,
            "showOutput": "always",
            "problemMatcher": "$msCompile"
            ...
        },
        ...
    ]
}
Following are a couple of other Stack Overflow threads on VSCode tasks:
Define multiple tasks in VSCode
Configure VSCode to execute different task