I am using Buildbot for building and testing the Chromium source. We maintain a local repository of the Chromium source and use Gerrit to manage it. Buildbot gets triggered successfully when a change is checked in.
I am facing a problem with Nightly builds. I want Buildbot to fetch the source and build even when there is no change, but it fails with the following error:
===Running git checkout --force origin/ ===
error: pathspec 'origin/' did not match any file(s) known to git.
===Failed in 0.0. mins===
Can somebody tell me how to get this working?
My master.cfg is as below:
from master import master_utils
from master import slaves_list
from buildbot.schedulers import timed
from buildbot.changes import filter
import config
import master_site_config
ActiveMaster = master_site_config.Chromium
c = BuildmasterConfig = {}
c['change_source'] = []
c['schedulers'] = []
c['status'] = []
pendingRequests = {}
c['builders'] = []
import master_source_cfg
import master_full_cfg
master_source_cfg.Update(config, ActiveMaster, c)
master_full_cfg.Update(config, ActiveMaster, c)
c['logCompressionLimit'] = False
c['projectName'] = ActiveMaster.project_name
c['projectURL'] = config.Master.project_url
c['buildbotURL'] = 'build.chromium.org/p/chromium/';
slaves = slaves_list.SlavesList('slaves.cfg', 'Chromium')
# Trying to find the location of adding factories to the builder
from twisted.python import log
for builder in c['builders']:
    log.msg('My BUILDER', builder)
    log.msg(dir(builder['factory']))
    builder['slavenames'] = slaves.GetSlavesName(builder=builder['name'])
c['slaves'] = master_utils.AutoSetupSlaves(
    c['builders'],
    config.Master.GetBotPassword(),
    missing_recipients=['buildbot@chromium-build-health.appspotmail.com'])
master_utils.VerifySetup(c, slaves)
master_utils.AutoSetupMaster(c, ActiveMaster, enable_http_status_push=ActiveMaster.is_production_host)
c['buildHorizon'] = 3000
c['logHorizon'] = 3000
c['eventHorizon'] = 200
I had missed adding the last part of the master.cfg, which contains the configuration for the nightly build:
c['schedulers'].append(
    timed.Nightly(name='nightly',
                  branch='src',
                  change_filter=filter.ChangeFilter(project_re='pj/Sample/chromium.*', branch=['buildbot-testing1']),
                  builderNames=['Linux x64'],
                  hour=8,
                  minute=15))
With the above configuration, the nightly gets triggered at the correct time.
However, it does not check out the 'buildbot-testing1' branch as needed.
Instead, it exits with the following error:
===Running git checkout --force origin/ ===
error: pathspec 'origin/' did not match any file(s) known to git.
===Failed in 0.0. mins===
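For completeness, this is the variant I am considering next. It is only a minimal sketch, assuming the Git source step falls back to the scheduler's branch when no change triggered the build, so that 'origin/<branch>' is never empty:
c['schedulers'].append(
    timed.Nightly(name='nightly',
                  # assumption: pass the actual Git branch instead of 'src' so the
                  # forced nightly build has a concrete ref to check out
                  branch='buildbot-testing1',
                  builderNames=['Linux x64'],
                  hour=8,
                  minute=15))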
I'm trying to deploy a NixOS VM while storing its configuration on a private GitLab repository.
My configuration.nix looks like this (simplified to only include the relevant bits):
{ pkgs, ... }:
let
  repo = pkgs.fetchFromGitLab { owner = "hectorj"; repo = "nix-fleet"; };
in {
  imports = [
    ./hardware-configuration.nix
    "${repo}/my-server-name/host.nix"
  ];
}
but it is giving me this error:
error: infinite recursion encountered
at /nix/var/nix/profiles/per-user/root/channels/nixos/lib/modules.nix:496:28:
495| builtins.addErrorContext (context name)
496| (args.${name} or config._module.args.${name})
| ^
497| ) (lib.functionArgs f);
I do not understand where the recursion is happening.
It doesn't seem like it's even fetching the repo, as I can put any non-existent names in the args and get the same error.
I saw https://nixos.org/guides/installing-nixos-on-a-raspberry-pi.html doing something similar without issue:
imports = ["${fetchTarball "https://github.com/NixOS/nixos-hardware/archive/936e4649098d6a5e0762058cb7687be1b2d90550.tar.gz" }/raspberry-pi/4"];
And I can use that line on my VM and it will build fine.
What am I missing?
The recursion is as follows:
1. Compute the configuration
2. Compute the config fixpoint of all modules
3. Find all modules
4. Compute "${repo}/my-server-name/host.nix"
5. Compute repo (pkgs.fetch...)
6. Compute pkgs
7. Compute config._module.args.pkgs (Nixpkgs can be configured by NixOS options)
8. Compute the configuration (= 1)
You can break the recursion at 6 by using builtins.fetchTarball instead.
Alternatively, you can break it around 7, by using a different "pkgs".
If you're using configuration.nix as part of a larger expression, you may be able to pass an invoked Nixpkgs to NixOS via specialArgs.pkgs2 = import nixpkgs { ... }. This creates a pkgs2 module argument that can't be configured by NixOS itself.
Otherwise, you could define pkgs2 in a let binding.
{ pkgs, ... }:
let
  # pkgs2: An independent Nixpkgs that can be constructed before the NixOS
  # imports are resolved.
  pkgs2 = import <nixpkgs> {};
  repo = pkgs2.fetchFromGitLab { owner = "hectorj"; repo = "nix-fleet"; };
in {
  imports = [
    ./hardware-configuration.nix
    "${repo}/my-server-name/host.nix"
  ];
}
I'm trying to set up a simple KubeFlow pipeline, and I'm having trouble packaging up dependencies in a way that works for KubeFlow.
The code simply downloads a config file and parses it, then passes back the parsed configuration.
However, in order to parse the config file, it needs to have access to another internal python package.
I have a .tar.gz archive of the package hosted on a bucket in the same project, and added the URL of the package as a dependency, but I get an error message saying tarfile.ReadError: not a gzip file.
I know the file is good, so it's some intermediate issue with hosting on a bucket or the way kubeflow installs dependencies.
Here is a minimal example:
import os
from kfp import compiler
from kfp import dsl
from kfp.components import func_to_container_op
from google.protobuf import text_format
from google.cloud import storage
import training_reader
def get_training_config(working_bucket: str,
                        working_directory: str,
                        config_file: str) -> training_reader.TrainEvalPipelineConfig:
    download_file(working_bucket, os.path.join(working_directory, config_file), "ssd.config")
    pipeline_config = training_reader.TrainEvalPipelineConfig()
    with open("ssd.config", 'r') as f:
        text_format.Merge(f.read(), pipeline_config)
    return pipeline_config
config_op_packages = ["https://storage.cloud.google.com/my_bucket/packages/training-reader-0.1.tar.gz",
"google-cloud-storage",
"protobuf"
]
training_config_op = func_to_container_op(get_training_config,
base_image="tensorflow/tensorflow:1.15.2-py3",
packages_to_install=config_op_packages)
def output_config(config: training_reader.TrainEvalPipelineConfig) -> None:
    print(config)
output_config_op = func_to_container_op(output_config)
@dsl.pipeline(
    name='Post Training Processing',
    description='Building the post-processing pipeline'
)
def ssd_postprocessing_pipeline(
        working_bucket: str,
        working_directory: str,
        config_file: str):
    config = training_config_op(working_bucket, working_directory, config_file)
    output_config_op(config.output)
pipeline_name = ssd_postprocessing_pipeline.__name__ + '.zip'
compiler.Compiler().compile(ssd_postprocessing_pipeline, pipeline_name)
The https://storage.cloud.google.com/my_bucket/packages/training-reader-0.1.tar.gz URL requires authentication. Try to download it in Incognito mode and you'll see the login page instead of the file.
Changing the URL to https://storage.googleapis.com/my_bucket/packages/training-reader-0.1.tar.gz works for public objects, but your object is not public.
The only thing you can do (if you cannot make the package public) is to use the google.cloud.storage library or the gsutil program to download the file from the bucket and then install it manually using subprocess.run([sys.executable, '-m', 'pip', 'install', ...]).
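A minimal sketch of that approach (the bucket name, object path, and helper name are only placeholders, not something KFP provides):
import subprocess
import sys

from google.cloud import storage

def install_private_package(bucket_name: str, blob_name: str) -> None:
    # Download the private archive from the bucket with the google.cloud.storage client.
    local_path = "/tmp/training-reader.tar.gz"
    storage.Client().bucket(bucket_name).blob(blob_name).download_to_filename(local_path)
    # Install the downloaded archive into the current interpreter's environment.
    subprocess.run([sys.executable, "-m", "pip", "install", local_path], check=True)

# Placeholder bucket and object names.
install_private_package("my_bucket", "packages/training-reader-0.1.tar.gz")
Inside a lightweight component this would have to run before training_reader is imported.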
Where are you downloading the data from?
What's the purpose of
pipeline_config = training_reader.TrainEvalPipelineConfig()
with open("ssd.config", 'r') as f:
text_format.Merge(f.read(), pipeline_config)
return pipeline_config
Why not just do the following:
def get_training_config(
    working_bucket: str,
    working_directory: str,
    config_file: str,
    output_config_path: OutputFile('TrainEvalPipelineConfig'),
):
    download_file(working_bucket, os.path.join(working_directory, config_file), output_config_path)
Regarding "the way kubeflow installs dependencies":
Export your component to a loadable component.yaml and you'll see how KFP Lightweight components install dependencies:
training_config_op = func_to_container_op(
    get_training_config,
    base_image="tensorflow/tensorflow:1.15.2-py3",
    packages_to_install=config_op_packages,
    output_component_file='component.yaml',
)
P.S. Some small pieces of info:
@dsl.pipeline(
Not required unless you want to use the dsl-compile command-line program
pipeline_name = ssd_postprocessing_pipeline.__name__ + '.zip'
compiler.Compiler().compile(ssd_postprocessing_pipeline, pipeline_name)
Did you know that you can just call kfp.Client(host=...).create_run_from_pipeline_func(ssd_postprocessing_pipeline, arguments={}) to run the pipeline right away?
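For example, a small sketch (the host URL and argument values here are placeholders):
import kfp

# Placeholder endpoint and arguments.
client = kfp.Client(host='https://my-kfp-endpoint.example.com')
client.create_run_from_pipeline_func(
    ssd_postprocessing_pipeline,
    arguments={'working_bucket': 'my_bucket',
               'working_directory': 'configs',
               'config_file': 'ssd.config'})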
I am trying to get my Faster R-CNN model into a Container Instance on ACI. For that I need my Docker image to have Python version 3.5.*. I specify that in my conda YAML file, but every time I spin up an instance and docker run -it *** /bin/bash into it, I see that it only has Python 3.6.7.
https://user-images.githubusercontent.com/21140767/50680590-82b20b80-1008-11e9-9bfe-4a0e71084ce0.png
How can I get my Docker image to have Python version 3.5.*? I already tried conda installing Python version 3.5.2, but that didn't work, as in the end it didn't have 3.5.2, only 3.6.7. (dfimage lets you see the Dockerfile from which the image was created: https://hub.docker.com/r/chenzj/dfimage/).
https://user-images.githubusercontent.com/21140767/50680673-d6245980-1008-11e9-9d48-71a7c150d925.png
My yaml:
name: project_environment
dependencies:
  - python=3.5.2
  - pip:
    - matplotlib
    - opencv-python==3.4.3.18
    - azureml-core==1.0.6
    - numpy
    - cntk
    - cython
channels:
  - anaconda
Notebook cell:
from azureml.core.conda_dependencies import CondaDependencies
svmandss = CondaDependencies.create(python_version="3.5.2", pip_packages=[
"matplotlib",
"opencv-python==3.4.3.18",
"azureml-core",
"numpy",
"cntk",
"cython"], )
svmandss.add_channel('anaconda')
with open("fasterrcnn.yml","w") as f:
    f.write(svmandss.serialize_to_string())
Another notebook cell with the ContainerImage specification:
image_config = ContainerImage.image_configuration(execution_script="score_fasterrcnn.py",runtime="python",conda_file="./fasterrcnn.yml",dependencies=listdir("utils"),docker_file="./Dockerfile")
service = Webservice.deploy_from_model(workspace=ws,
                                       name='faster-rcnn',
                                       deployment_config=aciconfig,
                                       models=[Model(workspace=ws, name='Faster-RCNN')],
                                       image_config=image_config)
service.wait_for_deployment(show_output=True)
Note
For better readability see my GitHub issue: (https://github.com/Azure/MachineLearningNotebooks/issues/163).
Currently, the version of Python is fixed to what's in Azure ML's base image, when deploying the web service. We're investigating removing this limitation in future.
Since this is one of the top Google answers when searching for "azureml python version" I'm posting the answer here. The documentation is not very clear when it comes to this, but the following will work:
from azureml.core import Workspace
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
ws = Workspace.from_config()
# This is the important part
conda_dep = CondaDependencies(conda_dependencies_file_path="pipeline/environment.yml")
aml_run_config = RunConfiguration(conda_dependencies=conda_dep)
# Define compute target - must be preconfigured in the workspace
compute_target = ws.compute_targets['my-azureml-target']
aml_run_config.target = compute_target
from azureml.pipeline.steps import PythonScriptStep
script_source_dir = "./pipeline"
step_1_script = "test.py"
step_1 = PythonScriptStep(
    script_name=step_1_script,
    source_directory=script_source_dir,
    compute_target=compute_target,
    runconfig=aml_run_config,
    allow_reuse=True
)
from azureml.pipeline.core import Pipeline
# Build the pipeline
pipeline1 = Pipeline(workspace=ws, steps=[step_1])
from azureml.core import Experiment
# Submit the pipeline to be run
pipeline_run1 = Experiment(ws, 'Test-pipeline').submit(pipeline1)
pipeline_run1.wait_for_completion(show_output=True)
This assumes the following directory structure:
root/
    create_pipeline.py
    pipeline/
        test.py
        environment.yml
where create_pipeline.py is the file above, test.py is the script you would like to run, and environment.yml is the conda environment file, including the Python version.
I was able to change the Python version by registering the environment in Azure ML Workspace:
from azureml.core.environment import Environment, Workspace
environment = Environment.from_conda_specification(name='myenv', file_path='environment.yml')
environment.python.user_managed_dependencies = False
workspace = Workspace.from_config()
environment = environment.register(workspace=workspace)
env_build = environment.build(workspace=workspace)
Then, configure the endpoint for publishing as follows:
from azureml.core.model import InferenceConfig
environment = Environment.get(workspace=workspace, name='myenv')
inference_config = InferenceConfig(
    entry_script='inference.py',
    source_directory='.',
    environment=environment
)
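The InferenceConfig can then be passed to a deployment call. A rough sketch of an ACI deployment (the service name, resource sizes, and model name below are placeholders):
from azureml.core.model import Model
from azureml.core.webservice import AciWebservice

# Placeholder resource sizes for the ACI deployment.
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=2)
service = Model.deploy(workspace=workspace,
                       name='my-endpoint',                 # placeholder service name
                       models=[Model(workspace, name='my-model')],  # placeholder model name
                       inference_config=inference_config,
                       deployment_config=deployment_config)
service.wait_for_deployment(show_output=True)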
This is using Azure ML SDK 1.29.0. Perhaps this has already been fixed and the original method works as well, but I didn't test that.
EDIT:
This is no longer an issue for me; I found another way to get my code to work with Python version 3.6.7.
This is, however, still an issue if you ask me. If I do need Python version 3.5 in the future, there is no solution as of now.
You can still post an answer if you would like.
Deploying my NixOps machines takes a lot of time, as packages need to be built. I want to do the building regularly on my trusted private Hydra instance.
My current approach involves this release.nix file, but it doesn't work out so well.
{ nixpkgs ? <nixpkgs>, onlySystem ? true, extraModules ? [] }:
let
  nixos = import "${nixpkgs}/nixos";
  buildEnv = conf: (nixos {
    configuration = conf;
  });
  buildTarget = m: let build = buildEnv (buildConf m); in
    if onlySystem then build.system else build.vm;
  buildConf = module: { ... }:
    {
      imports = [ module ] ++ extraModules;
    };
in
{
  machine1 = buildTarget ./machine1/configuration.nix;
  machine2 = buildTarget ./machine2/configuration.nix;
  machine3 = buildTarget ./machine3/configuration.nix;
  machine4 = buildTarget ./machine4/configuration.nix;
}
I don't really understand this code, as I copied it from here.
This builds fine if I run nix-build release.nix locally, but on Hydra I never get a full build. Sometimes builds don't get dequeued (they just don't get built), and sometimes they fail with various error messages. As none of the Hydra problems are reproducible (besides the fact that I never get a full build), I wonder what the best practice for building a NixOps deployment is.
Please note that I have unfree packages in my deployment. The option nixpkgs.config.allowUnfree = true; is set on the Hydra server.
This is not a question about my Hydra failures, but about what would be a good way to build a NixOps deployment with Hydra CI.
As far as I know, there is no way to make this super easy. Your code looks OK, but my method is slightly different: I only build the toplevel attribute, and I construct the NixOS configuration differently.
I build NixOS 'installations' from inside Nix using something like:
let
  modules = [ ./configuration.nix ];
  nixosSystem = import (pkgs.path + "/nixos/lib/eval-config.nix") {
    inherit (pkgs) system;
    inherit modules;
  };
in
  nixosSystem.config.system.build.toplevel
What:
With Jenkins I want to periodically process only the changed files from SVN and commit the output of the processing back to SVN.
Why:
We are committing binary files into SVN (we are working with Oracle Forms and are committing fmb files). I created a script which exports the fmb files to XML (with the original Fmb2XML tool from Oracle), and then I convert the XML to plain source, which we also want to commit. This allows us to grep, view the changes, and so on.
Problem:
At the moment I am only able to check out everything, convert the whole directory, and commit the whole directory back to SVN. But since all the plain-text files are newly generated, they all appear changed in SVN. I want to commit only the changed ones.
Can anyone help me with this?
I installed the Groovy plugin, configured the Groovy language and created a script which I execute as a "system Groovy script". The script looks like:
import java.lang.ProcessBuilder.Redirect
import hudson.model.*
import hudson.util.*
import hudson.scm.*
import hudson.scm.SubversionChangeLogSet.LogEntry
// uncomment one of the following def build = ... lines
// work with current build
def build = Thread.currentThread()?.executable
// for testing, use last build or specific build number
//def item = hudson.model.Hudson.instance.getItem("Update_SRC_Branch")
//def build = item.getLastBuild()
//def build = item.getBuildByNumber(35)
// get ChangesSets with all changed items
def changeSet= build.getChangeSet()
List<LogEntry> items = changeSet.getItems()
def affectedFiles = items.collect { it.paths }
// get filtered file names (only fmb) without path
def fileNames = affectedFiles.flatten().findResults {
    if (it.path.substring(it.path.lastIndexOf(".") + 1) != "fmb") return null
    it.path.substring(it.path.lastIndexOf("/") + 1)
}.sort().unique()
// setup log files
def stdOutFile = "${build.rootDir}\\stdout.txt"
def stdErrFile = "${build.rootDir}\\stderr.txt"
// now execute the external transforming
fileNames.each {
    def params = [...]
    def processBuilder = new ProcessBuilder(params)
    // redirect stdout and stderr to log files
    processBuilder.redirectOutput(new File(stdOutFile))
    processBuilder.redirectError(new File(stdErrFile))
    def process = processBuilder.start()
    process.waitFor()
    // print log files
    println new File(stdOutFile).readLines()
    System.err.println new File(stdErrFile).readLines()
}
Afterwards I use the command line with "svn commit" to commit the updated files.
Preliminary note: in SVN jargon, getting files from the repo is a "checkout" and saving them to the repo is a "commit". Don't mix CVS and SVN terms; it can lead to misinterpretation.
In order to get the list of changed files in a revision (or revset), you can use one of the following:
Easy way: svn log with the options -q -v. For a single revision you also add -c REVNO; for a revision range: -r REVSTART:REVEND. Adding --xml will probably produce output that is easier to process than plain text.
You must post-process the log output in order to get a pure list, because the log contains some data that is useless for you, and in the case of a log for a range the same file can be included in more than one revision.
z:\>svn log -q -v -r 1190 https://subversion.assembla.com/svn/customlocations-greylink/
------------------------------------------------------------------------
r1190 | lazybadger | 2012-09-20 13:19:45 +0600 (Чт, 20 сен 2012)
Changed paths:
M /trunk/Abrikos.ini
M /trunk/ER-Telecom.ini
M /trunk/GorNet.ini
M /trunk/KrosLine.ini
M /trunk/Rostelecom.ini
M /trunk/Vladlink.ini
------------------------------------------------------------------------
This is an example for a single revision: you have to pipe the log through grep trunk | sort -u and add the repo base to the filenames.
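A small sketch of that post-processing, using the --xml output mentioned above (Python here instead of shell; the repository URL and revision are simply taken from the example):
import subprocess
import xml.etree.ElementTree as ET

def changed_paths(repo_url, revision):
    # Quiet, verbose, XML-formatted log of a single revision.
    xml_log = subprocess.run(
        ['svn', 'log', '--xml', '-q', '-v', '-c', str(revision), repo_url],
        capture_output=True, text=True, check=True).stdout
    # Collect the changed paths and de-duplicate them.
    return sorted({path.text for path in ET.fromstring(xml_log).iter('path')})

for path in changed_paths('https://subversion.assembla.com/svn/customlocations-greylink/', 1190):
    print(path)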
Harder way: with an additional SCM (namely Mercurial) and hgsubversion, you'll get a slightly better (maybe) log with hg log --template "{files}\n". Only slightly, because you'll get only the file list, but the file sets of different revisions are newline-separated and the filenames inside a revision are space-separated.