How to access artifacts in Kubeflow runtime?

I would like to access mlpipeline-metrics content from another component.

To fetch metrics from a run:
import kfp

client = kfp.Client(host=host)
client.get_run(run_id).run.metrics
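Each entry in that metrics list has a name and a numeric value; for example (a minimal sketch, assuming the KFP v1 SDK is installed in the other component's image, with placeholder endpoint and run ID):
import kfp

# Placeholders: point the client at your KFP API endpoint and pick the run
# whose mlpipeline-metrics you want to read.
client = kfp.Client(host="http://<kfp-endpoint>")
run_detail = client.get_run("<run-id>")
for metric in run_detail.run.metrics:  # each entry has .name and .number_value
    print(metric.name, metric.number_value)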

Related

Domain Mapping to point to a tag inside a service on Cloud Run

Right now I'm deploying to Cloud Run with
gcloud run deploy myapp --tag pr123 --no-traffic
I can then access the app via
https://pr123---myapp-jo5dg6hkf-ez.a.run.app
Now I would like a custom domain mapping that points to this tag. I know how to point a custom domain to the service, but I don't know how to point it to the tagged version of my service.
Can I add labels to the DomainMapping that would cause the mapping to go to this version of my Cloud Run service? Or is there a routeName, e.g. myapp#pr123, that would do the trick?
In the end I would like to have
https://pr123.dev.mydomain.com
being the endpoint for this service.
With a custom domain mapping, DNS points to a service, not to a revision/tag of the service, so you can't do it that way.
The solution is to use a load balancer with a serverless NEG. The important part is to define a URL mask that extracts the tag and the service from the URL received by the load balancer.
I ended up building the load balancer with a network endpoint group (as suggested). For further reference, here is my Terraform snippet to create it. The <tag> part of the URL mask is then the traffic tag you assign to your revision.
resource "google_compute_region_network_endpoint_group" "api_neg" {
name = "api-neg"
network_endpoint_type = "SERVERLESS"
region = "europe-west3"
cloud_run {
service = data.google_cloud_run_service.api_dev.name
url_mask = "<tag>.preview.mydomain.com"
}
}

Is there any API from Jenkins that will give the list of API tokens present in the Configure tab?

Can someone help me: is there any API from Jenkins that will give me the list of API tokens present in the User >> Configure tab?
Thanks in advance!
You can run the following Groovy script, e.g. in the Jenkins Script Console:
// Look up a user and print the names of their API tokens
def user = hudson.model.User.get('username')
def prop = user.getProperty(jenkins.security.ApiTokenProperty)
def tokenList = prop.getTokenStore().getTokenListSortedByName()
tokenList.each {
    println(it.getName())
}
Usually I run this as an admin to get stats on each token and who it belongs to.

How to fetch SSM Parameters from two different accounts using AWS CDK

I have a scenario where I'm using CodePipeline to deploy my cdk project from a tools account to several environment accounts.
The way my pipeline is deploying is by running cdk deploy from within a CodeBuild job.
My team has decided to use SSM Parameter Store to store configuration, and we ended up with some parameters living in the environment account, for example the VPC_ID (resources/vpc/id), which I can read at deploy time via ssm.StringParameter.valueForStringParameter.
However, other parameters live in the tools account, such as the account IDs of my environment accounts (environment/nonprod/account/id) and other global config. I'm having trouble fetching those values.
At the moment, the only way I could think of was adding a previous step that reads all those values and loads them into the context.
Is there a more elegant approach for this problem? I was hoping I could specify in which account to get the SSM values from. Any ideas?
Thank you.
As you already stated, there is no native support for that. I am also using CodePipeline in cross-account deployments, so all the automation and product-specific parameters are stored in a secured account, and CodePipeline deploys the resources using CloudFormation as an action provider.
Cross-account resolution of SSM parameters isn't supported, so in the end I added an extra step (stage) to my CodePipeline, which is nothing else but a CodeBuild project that runs a script in a containerized environment; the script then "syncs" the parameters from the automation account to the destination account.
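For illustration, a minimal sketch of such a sync script in Python with boto3 (the role ARN and parameter path are hypothetical placeholders):
import boto3

# Sketch: copy one SSM parameter from the automation (tools) account, where
# this script runs, into a destination account via an assumed role.
# ROLE_ARN and the parameter path below are hypothetical placeholders.
ROLE_ARN = "arn:aws:iam::111111111111:role/ParameterSyncRole"
PARAM_NAME = "/environment/nonprod/account/id"

source_ssm = boto3.client("ssm")  # uses the tools account's credentials

# Assume a role in the destination account and build an SSM client with it
creds = boto3.client("sts").assume_role(
    RoleArn=ROLE_ARN, RoleSessionName="param-sync"
)["Credentials"]
dest_ssm = boto3.client(
    "ssm",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

# Read from the tools account, write to the destination account
value = source_ssm.get_parameter(Name=PARAM_NAME, WithDecryption=True)["Parameter"]["Value"]
dest_ssm.put_parameter(Name=PARAM_NAME, Value=value, Type="String", Overwrite=True)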
As part of your pipeline, I would add a preliminary step that executes a Lambda. That Lambda can then execute whatever queries you wish to obtain whatever metadata/config is required. The output from that Lambda can then be passed into the CodeBuild step.
For example, within the Lambda:
import * as AWS from 'aws-sdk';
import { CodePipelineEvent, Context } from 'aws-lambda';

export class ConfigFetcher {
  codepipeline = new AWS.CodePipeline();

  async fetchConfig(event: CodePipelineEvent, context: Context): Promise<void> {
    // Retrieve the Job ID from the Lambda action
    const jobId = event['CodePipeline.job'].id;

    // now get your config by executing whatever queries you need, even cross-account, via the SDK
    // we assume that the answer is in the variable someValue
    const params = {
      jobId: jobId,
      outputVariables: {
        MY_CONFIG: someValue,
      },
    };

    // now tell CodePipeline you're done
    await this.codepipeline.putJobSuccessResult(params).promise().catch(err => {
      console.error('Error reporting build success to CodePipeline: ' + err);
      throw err;
    });
    // make sure you have some sort of catch wrapping the above to post a failure to CodePipeline
    // ...
  }
}

const configFetcher = new ConfigFetcher();

exports.handler = async function fetchConfigMetadata(event: CodePipelineEvent, context: Context): Promise<void> {
  return configFetcher.fetchConfig(event, context);
};
Assuming that you create your pipeline using CDK, then your Lambda step will be created using something like this:
const fetcherAction = new LambdaInvokeAction({
  actionName: 'FetchConfigMetadata',
  lambda: configFetcher,
  variablesNamespace: 'ConfigMetadata',
});
Note the use of variablesNamespace: we need to refer to this later in order to retrieve the values from the Lambda's output and insert them as env variables into the CodeBuild environment.
Now our CodeBuild definition, again assuming we create it using CDK:
new CodeBuildAction({
  // ...
  environmentVariables: {
    MY_CONFIG: {
      type: BuildEnvironmentVariableType.PLAINTEXT,
      value: '#{ConfigMetadata.MY_CONFIG}',
    },
  },
});
We can call the variable whatever we want within CodeBuild, but note that ConfigMetadata.MY_CONFIG needs to match the namespace and output value of the Lambda.
You can have your lambda do anything you want to retrieve whatever data it needs - it's just going to need to be given appropriate permissions to reach across into other AWS accounts if required, which you can do using role assumption. Using a Lambda as a pipeline step will be a LOT faster than using a CodeBuild step in the pipeline, plus it's easier to change: if you write your Lambda code in Typescript/JS or Python, you can even use the AWS console to do in-place edits whilst you test that it executes correctly.
AFAIK there is no native way to achieve what you described; if there is one, I'd like to know too. I believe you can use a CloudFormation custom resource backed by Lambda for this purpose.
You can pass parameters to the lambda request and get information back from the lambda response.
See https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources-lambda.html, https://www.2ndwatch.com/blog/a-step-by-step-guide-on-using-aws-lambda-backed-custom-resources-with-amazon-cfts/ and https://docs.aws.amazon.com/cdk/api/latest/docs/custom-resources-readme.html for more information.
This question is a year old, but a simpler method I found for retrieving parameters from your tools/deployment account is to specify them as env variables in your buildspec file. CodeBuild will always pull these from whatever account your job is running in (which in this question's scenario would be the tools account).
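For example, a minimal buildspec sketch (the parameter path and build command are illustrative):
version: 0.2
env:
  parameter-store:
    # resolved from Parameter Store in the account the build runs in (the tools account here)
    ACCOUNT_ID: /environment/nonprod/account/id
phases:
  build:
    commands:
      - cdk deploy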
To pull parameters from your target environment accounts, it's best to use the CDK SSM approach suggested by the question author.

Postman: How to make multiple requests at the same time

I want to POST data from Postman Google Chrome extension.
I want to make 10 requests with different data and it should be at the same time.
Is it possible to do such in Postman?
If yes, can anyone explain to me how can this be achieved?
I guess there's no such feature in Postman as running concurrent tests.
If I were you, I would consider Apache JMeter, which is used exactly for such scenarios.
Regarding Postman, the only thing that comes close to meeting your needs is the Postman Runner.
There you can specify the details:
number of iterations,
upload a CSV file with data for different test runs, etc.
The runs won't be concurrent, only consecutive.
Do consider JMeter (you might like it).
Postman doesn't do that but you can run multiple curl requests asynchronously in Bash:
curl url1 & curl url2 & curl url3 & ...
Remember to add an & after each request, which means each request should run as an asynchronous job.
Postman, however, can generate a curl snippet for your request: https://learning.getpostman.com/docs/postman/sending_api_requests/generate_code_snippets/
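If you would rather script it than chain curl commands, a minimal Python sketch with a thread pool achieves the same effect (the URL and payloads are placeholders; uses the third-party requests library):
import concurrent.futures

import requests  # third-party: pip install requests

# Sketch: fire 10 POSTs with different payloads at (roughly) the same time.
URL = "https://example.com/api"  # placeholder endpoint
payloads = [{"id": i} for i in range(10)]

with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
    futures = [pool.submit(requests.post, URL, json=p) for p in payloads]
    for future in concurrent.futures.as_completed(futures):
        print(future.result().status_code)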
I don't know if this question is still relevant, but there is such a possibility in Postman now. They added it a few months ago.
All you need is to create a simple .js file and run it via Node.js. It looks like this:
var path = require('path'),
    async = require('async'), // https://www.npmjs.com/package/async
    newman = require('newman'),
    parametersForTestRun = {
        collection: path.join(__dirname, 'postman_collection.json'), // your collection
        environment: path.join(__dirname, 'postman_environment.json'), // your env
    };

parallelCollectionRun = function (done) {
    newman.run(parametersForTestRun, done);
};

// Runs the Postman sample collection thrice, in parallel.
async.parallel([
    parallelCollectionRun,
    parallelCollectionRun,
    parallelCollectionRun
],
function (err, results) {
    err && console.error(err);
    results.forEach(function (result) {
        var failures = result.run.failures;
        console.info(failures.length ? JSON.stringify(failures, null, 2) :
            `${result.collection.name} ran successfully.`);
    });
});
Then just run this .js file ('node fileName.js' in cmd).
Not sure if people are still looking for simple solutions to this, but you are able to run multiple instances of the "Collection Runner" in Postman. Just create a runner with some requests and click the "Run" button multiple times to bring up multiple instances.
Run all collections in a folder in parallel:
'use strict';

global.Promise = require('bluebird');
const path = require('path');
const newman = Promise.promisifyAll(require('newman'));
const fs = Promise.promisifyAll(require('fs'));

const environment = 'postman_environment.json';
const FOLDER = path.join(__dirname, 'Collections_Folder');

let files = fs.readdirSync(FOLDER);
files = files.map(file => path.join(FOLDER, file));
console.log(files);

Promise.map(files, file => {
    return newman.runAsync({
        collection: file, // your collection
        environment: path.join(__dirname, environment), // your env
        reporters: ['cli']
    });
}, {
    concurrency: 2
});
In Postman's Collection Runner you can't make simultaneous asynchronous requests, so use Apache JMeter instead. It allows you to add multiple threads and a Synchronizing Timer.
If you are only doing GET requests and you need another simple solution from within your Chrome browser, just install the "Open Multiple URLs" extension:
https://chrome.google.com/webstore/detail/open-multiple-urls/oifijhaokejakekmnjmphonojcfkpbbh?hl=en
I've just run 1500 URLs at once; it lagged Google a bit, but it works.
The Runner option is now on the lower right side of the panel.
If you need to generate many consecutive requests (instead of quickly clicking the Send button), you can use the Runner. Please note it is not a true "parallel request" generator.
File -> New Runner Tab
Now you can drag and drop your requests from a collection, keep checked only the requests you would like the Runner to generate, set iterations to 10 (to generate 10 requests), and set the delay to, for example, 0 (to make it as fast as possible).
The easiest way is to get the Google Chrome "Talend API Tester" extension.
Go to Help and type in "Create Scenario",
...or just go to this link => https://help.talend.com/r/en-US/Cloud/api-tester-user-guide/creating-scenario
I was able to send several POST API calls simultaneously.
You can use Fiddler with traffic capture started to record manual queries from Postman, then select as many of them as you want in Fiddler's sessions list and replay them (press the R key); they will run in parallel.
https://docs.telerik.com/fiddler/generate-traffic/tasks/resendrequest
You can run multiple instances of the Postman Runner and run the same collection with different data files in each instance.
Open multiple instances of Postman; this replicates the setup and they run concurrently.

Retrieving POST data from a webhook in Jenkins

I am using GitLab and I want to fire a system hook whenever a project is created. I have added the hook with the following Jenkins API call (I am using a Jenkins plugin, which is why the API looks different).
http://myip:8081/buildByToken/buildWithParameters?job=testHook&token=hook
This starts the Jenkins job, but I am unable to get the POST data sent by the hook in my Jenkins job.
The following is an example of what GitLab sends as POST data with this hook.
{
  "created_at": "2012-07-21T07:30:54Z",
  "event_name": "project_create",
  "name": "StoreCloud",
  "owner_email": "johnsmith@gmail.com",
  "owner_name": "John Smith",
  "path": "stormcloud",
  "path_with_namespace": "jsmith/stormcloud",
  "project_id": 74,
  "project_visibility": "private"
}
Is there a way to retrieve the POST data in Jenkins that is sent with the webhook?
There is a plugin specifically for Jenkins and GitLab integration:
https://github.com/elvanja/jenkins-gitlab-hook-plugin#build-now-hook
By using http://your-jenkins-server.com/gitlab/build_now, you have access to all payload variables, like the examples in the documentation. Your build needs to be parameterized, and all variables you want access to need to be declared. You will then have an environment variable available, like ${USER_NAME}.
However, if you want to use /gitlab/notify_commit, which has a lot more cool possibilities, payload data will not work because of the gap between the trigger and the build (I am talking about the poll process).
I believe that your /buildByToken/buildWithParameters call, since it is build_now-like, will have the payload. Using the GitLabHookPlugin, you will have the parameters for sure.
