I have a project which consists of a CLI and an API. Since the CLI has different users than the API, both exist as separate git repositories.
Frequently, developing on the CLI requires editing the API at the same time. So I would like to ease the dev workflow as much as possible, while keeping the CLI as simple to use as possible.
Ideally, I would like to do this:
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true

[dev-packages]
api = {editable = true, path = "./../api"}

[packages]
api = {git = "<some git>s.git", ref = "master", editable = true}

[requires]
python_version = "3.7"

[pipenv]
allow_prereleases = true
Unfortunately, the git version in [packages] always seems to "win" - is there a way to prevent that?
I have a Cloud Composer environment that is deployed on a GKE cluster, and my wish is to be able to retrieve the info on this cluster via operators, for example, without hard-coding it or manually putting it in environment variables.
Relevant info I wish to get for now:
COMPOSER_SERVICE_ACCOUNT = "<acc_name>@<project_id>.iam.gserviceaccount.com"
COMPOSER_BUCKET = "<bucket_name>"
COMPOSER_PROJECT = "<project_id_where_composer_is_deployed>"
COMPOSER_PYTHON_VERSION = "3.8.12"
COMPOSER_VERSION = "<relevant_v>"
COMPOSER_UI_URL = "<...>"
AIRFLOW_VERSION = "2.3.4"
...
My intuition is to use gcloud via a BashOperator, but I was hoping there was a library capable of performing this task better.
You can use the built-in CloudComposerGetEnvironmentOperator operator:
get_env = CloudComposerGetEnvironmentOperator(
    task_id="get_env",
    project_id='project',
    region='europe-west1',
    environment_id='composer-env-name',
)
This operator returns all the environment information; it's equivalent to:
gcloud composer environments describe composer-env-name \
--location europe-west1
You can access the result dict via XCom if needed.
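For example, a downstream task could pull it back; a minimal sketch, assuming the operator pushes its return value as the default return_value XCom (which operators do):

from airflow.operators.python import PythonOperator

def use_env_info(**context):
    # Pull the environment description pushed by get_env; the dict mirrors
    # what `gcloud composer environments describe` prints.
    env = context["ti"].xcom_pull(task_ids="get_env")
    print(env)

use_env = PythonOperator(task_id="use_env", python_callable=use_env_info)

get_env >> use_env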
If you don't want to hard-code arguments like the project ID and Composer environment name, you can retrieve them from predefined Composer environment variables, for example:
PROJECT_ID = os.getenv("GCP_PROJECT")
COMPOSER_ENV_NAME = os.getenv("COMPOSER_ENVIRONMENT")
get_env = CloudComposerGetEnvironmentOperator(
    task_id="get_env",
    project_id=PROJECT_ID,
    region='europe-west1',
    environment_id=COMPOSER_ENV_NAME,
)
In Bazel, how do I fetch a remote file as a build rule, not as a WORKSPACE rule?
I want to use a build rule because WORKSPACE rules are not loaded transitively.
e.g. this fails:
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_file")
http_file(
name = "foo",
urls = [ "https://example.com" ],
sha256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
executable = True,
)
Error in repository_rule: 'repository rule http_file' can only be called during workspace loading
If you really want to do that, you have to implement your own rule; a naïve, trivial example relying on curl to fetch could be:
def _impl(ctx):
    args = ctx.actions.args()
    args.add("-o", ctx.outputs.out)
    args.add(ctx.attr.url)
    ctx.actions.run(
        outputs = [ctx.outputs.out],
        executable = "curl",
        arguments = [args],
    )

get_stuff = rule(
    _impl,
    attrs = {
        "url": attr.string(
            mandatory = True,
        ),
    },
    outputs = {"out": "%{name}.out"},
)
But, especially in such a trivial form, it comes with problems. Apart from the obvious questions: do you want to step outside the sandbox during the build, and do you want to talk to something across the network during the build (outside the sandbox)? It also bypasses the repository cache, and possibly gets the remote_cache involved (networked caching of networked fetching). Specifically in this example: if the content of the file pointed to by url changes, the build has no idea and only fetches it again when it either hasn't done so yet or the url itself has changed. I.e. the implementation would need to be more robust (mimicking that of http_file, for instance).
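For illustration, a slightly more robust sketch that at least pins the content with a checksum; it still relies on host curl and shasum, so the sandbox and network caveats above still apply:

def _impl(ctx):
    out = ctx.outputs.out
    # Fetch, then verify against the pinned checksum; a mismatch fails the
    # action instead of silently using changed content.
    ctx.actions.run_shell(
        outputs = [out],
        command = ("curl -sfL -o '{out}' '{url}' && " +
                   "echo '{sha}  {out}' | shasum -a 256 -c -").format(
            out = out.path,
            url = ctx.attr.url,
            sha = ctx.attr.sha256,
        ),
    )

get_stuff = rule(
    _impl,
    attrs = {
        "url": attr.string(mandatory = True),
        "sha256": attr.string(mandatory = True),
    },
    outputs = {"out": "%{name}.out"},
)

Since the checksum is part of the action's command line, updating the sha256 (or url) attribute also makes Bazel re-run the fetch.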
But it actually sounds like you're trying to address a different problem (transitive external dependencies), for which there could be another solution. One trick used for that is to define a macro in your first-level dependency that declares the next hop; after declaring that first level as an external dependency in your parent project, you load that macro and call it from the parent project's WORKSPACE (see the sketch below). This too has a price, though: the first-level dependency always has to be present (fetched or already cached), even if the build target asked for does not actually need it (as that load and macro call will always pull it in).
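A minimal sketch of that pattern, with hypothetical repository and file names throughout:

# deps.bzl, shipped inside the first-level dependency (@first_dep):
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

def first_dep_deps():
    # Declares @first_dep's own external dependencies (the next hop).
    http_archive(
        name = "second_dep",
        urls = ["https://example.com/second_dep.tar.gz"],  # placeholder
        sha256 = "<sha256 of second_dep>",  # placeholder
    )

# Parent project's WORKSPACE:
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "first_dep",
    urls = ["https://example.com/first_dep.tar.gz"],  # placeholder
    sha256 = "<sha256 of first_dep>",  # placeholder
)

load("@first_dep//:deps.bzl", "first_dep_deps")

first_dep_deps()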
I want to download and build Ruby within a workspace. I've been trying to implement this by mimicking rules_go, and I have that part working. The issue I'm having is that it rebuilds the openssl and ruby artifacts each time ruby_download_sdk is invoked. In the code below, the downloaded artifacts are cached, but the builds of openssl and ruby are always executed.
def ruby_download_sdk(name, version = None):
    # TODO detect os and arch
    os, arch = "osx", "x86_64"
    _ruby_download_sdk(
        name = name,
        version = version,
    )
    _register_toolchains(name, os, arch)

def _ruby_download_sdk_impl(repository_ctx):
    # TODO detect platform
    platform = ("osx", "x86_64")
    _sdk_build_file(repository_ctx, platform)
    _remote_sdk(repository_ctx)

_ruby_download_sdk = repository_rule(
    _ruby_download_sdk_impl,
    attrs = {
        "version": attr.string(),
    },
)

def _remote_sdk(repository_ctx):
    _download_openssl(repository_ctx, version = "1.1.1c")
    _download_ruby(repository_ctx, version = "2.6.3")
    openssl_path, ruby_path = "openssl/build", ""
    _build(repository_ctx, "openssl", openssl_path, ruby_path)
    _build(repository_ctx, "ruby", openssl_path, ruby_path)

def _build(repository_ctx, name, openssl_path, ruby_path):
    script_name = "build-{}.sh".format(name)
    template_name = "build-{}.template".format(name)
    repository_ctx.template(
        script_name,
        Label("@rules_ruby//ruby/private:{}".format(template_name)),
        substitutions = {
            "{ssl_build}": openssl_path,
            "{ruby_build}": ruby_path,
        },
    )
    repository_ctx.report_progress("Building {}".format(name))
    res = repository_ctx.execute(["./" + script_name], timeout = 20 * 60)
    if res.return_code != 0:
        print("res %s" % res.return_code)
        print(" -stdout: %s" % res.stdout)
        print(" -stderr: %s" % res.stderr)
Any advice on how I can make bazel aware of these build artifacts so that it doesn't rebuild them every time?
The problem is that bazel isn't really building your ruby and openssl. When it prepares your build tree and runs the repository rule, it just executes a shell script as instructed, which happens to build them, but that fact is essentially opaque to bazel (and it also happens before bazel itself would even start building).
There might be others, but off the top of my head I see the following as your options:
Pre-build your ruby environment and its results as an external dependency. The obvious downside (which may or may not be quite a lot of pain) is that you need to do so for all the platforms you need to support (including making sure correct detection and corresponding downloads happen). The upside is that you really only build once (per platform) and also have control over the tooling used across all hosts. This would likely be my primary choice.
Build ssl and ruby like any other C sources, making them just another bazel target. This, however, means you'd need to bazelify their builds (describe and maintain a bazel build of an otherwise bazel-unaware project).
You can continue further along the path you've started and (sort of) leave bazel out of it. I.e. for these builds, extend the magic in the build scripts used: for instance, by using a deterministic location and perhaps manifest files of what is around (also to make corruption less likely), make it possible to determine that the build has indeed already taken place, so you can just collect its previous results (see the sketch below).
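A rough sketch of that last option, with made-up paths and marker names (this deliberately trades hermeticity for speed):

def _build(repository_ctx, name, openssl_path, ruby_path):
    # Hypothetical deterministic location outside the repository workspace;
    # the build script is assumed to leave its results plus a marker file here.
    cache_dir = "/var/tmp/rules_ruby/{}-cache".format(name)
    if repository_ctx.path(cache_dir + "/.build-done").exists:
        repository_ctx.report_progress("Reusing previous {} build".format(name))
        # Just collect the previous results instead of rebuilding.
        repository_ctx.execute(["cp", "-r", cache_dir + "/.", name])
        return
    # ...otherwise run the template + execute steps as before, with the
    # build script writing .build-done into cache_dir on success.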
I'm using Cake as my build tool for TFS 2017 Update 2 and trying to implement the traditional Git Flow. In this flow, there are a few automatic merges that happen every time changes get into master; those changes need to be propagated to the develop branch.
Using Cake, I can run a PowerShell script or use LibGit2Sharp to accomplish the automatic merge for the best-case scenarios. But what about when the merge has conflicts? Do I need to fail the whole build because the merge process failed?
TFS certainly has something to deal with merges; it is none other than the Pull Request.
Question
Is there any tool or add-in for Cake that allows me to create a Pull Request during the execution of a build step?
I don't think there is any add-in available for you to create a pull request, but since you can run PowerShell, you can easily use the TFS REST API to create the pull request:
https://www.visualstudio.com/en-us/docs/integrate/api/git/pull-requests/pull-requests
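For illustration, a rough sketch of that call (Python here; the same POST works from PowerShell's Invoke-RestMethod). Server name, collection, project, repository and token are all placeholders:

import requests

# Placeholder TFS 2017 on-premises URL and names.
url = (
    "https://tfs.example.com/DefaultCollection/MyProject"
    "/_apis/git/repositories/MyRepo/pullrequests?api-version=3.0"
)

payload = {
    "sourceRefName": "refs/heads/master",
    "targetRefName": "refs/heads/develop",
    "title": "Auto merge master into develop",
    "description": "Created by the build",
}

# TFS accepts a personal access token via basic auth (empty user name).
resp = requests.post(url, json=payload, auth=("", "<personal-access-token>"))
resp.raise_for_status()
print(resp.json()["pullRequestId"])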
It was recently announced that there is a VSTS CLI:
https://blogs.msdn.microsoft.com/devops/2017/11/15/introducing-the-new-cli-for-vsts/
Which includes the ability to create a pull request:
https://learn.microsoft.com/en-gb/cli/vsts/get-started?view=vsts-cli-latest#create-a-pull-request
I don't think it would be particularly hard to create a Cake addin which wraps this tool and exposes its functionality through a set of aliases.
In the meantime, you could shell out to this tool using the Process aliases that currently exist in Cake.
Finally, I spent some time creating the package:
Nuget: https://www.nuget.org/packages/Cake.Tfs.AutoMerge
GitHub: https://github.com/mabreuortega/Cake.Tfs
The way you can use it now is similar to this:
Task("Merge")
.Does(c => {
CreateAutoMergePullRequest(
new AutoMergeSettings
{
// Required
CollectionUri = "https://{instance}/{collection-name}",
ProjectName = "project-name",
RepositoryName = "repository-name",
SourceBranch = "refs/heads/release/1.0.0",
TargetBranch = "refs/heads/develop",
Title = "[Auto Merge from Release]",
// Optional
Description = "Brief description of the changes about to get merge",
// Control
DeleteSourceBranch = false,
SquashMerge = false,
OverridePolicies = true,
AutoComplete = true,
AutoApprove = true
});
});
For any suggestions, please use the GitHub issue tracker.
Hope this helps!
I am developing a Firefox add-on, and I intend for my add-on to be able to run on both desktop and mobile devices. I think more or less everything is compatible with both environments; however, there are pieces of code that I would like to run depending on whether the current device is mobile or desktop. So the question is whether there is something like system.isMobile() that can be used in the following way:
var system = require("sdk/system");
if (system.isMobile())
    console.log("firefox for android");
else
    console.log("normal firefox");
As you can find in the system API documentation, there is a variable that tells you the operating system.
code:
var system = require("sdk/system");
console.log("system platform = " + system.platform);
output:
system platform = linux
disclaimer: I didn't test this in a mobile environment.
You can use the System High-Level API. system.platform will contain information on the type of OS the user is running.
You can use it like so:
var system = require("sdk/system");
var platform = system.platform; // Will contain platform, i.e. Windows, Linux, etc.
// You can log this data to the console
console.log("System Platform = " + platform);
When you read system.platform, it will usually return one of the values listed on this page, converted to lowercase.