I want to use aws_apigateway and use a Swagger file to define the API. How do I code this using AWS CDK, in either Python or TypeScript?
There is a workaround here, but as of 22/6/2022 this feature is still on the CDK roadmap (issue ref).
The workaround involves some manual steps: an initial Swagger extraction from the CDK app, which can then be fed back in (one way to do that is sketched below).
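For the "feeding back in" step, CDK ships a construct that builds a REST API from an existing OpenAPI/Swagger definition file. A minimal Python sketch, assuming CDK v2, that this code runs inside a Stack's __init__ (so self is the stack), and that swagger.yaml sits next to the app:
from aws_cdk import aws_apigateway as apigateway
# SpecRestApi creates the API Gateway resources from an existing
# OpenAPI/Swagger definition file instead of CDK-defined routes.
api = apigateway.SpecRestApi(
    self, "MyApi",
    api_definition=apigateway.ApiDefinition.from_asset("swagger.yaml"),
)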
In Jenkins, we want to get the Pipeline stage information through the API, e.g. whether a stage succeeded or failed. From another answer it seems we can achieve this through the Pipeline REST API Plugin.
My question is:
Can Jenkinsapi and Python-Jenkins achieve the same thing? It seems they're designed for plain Jenkins rather than the Pipeline plugin, right? If that's the case, is there a similar Python library for the Pipeline plugin?
Thanks!
You have all the info to answer your own question in the documentation you linked. Just to put it together:
Jenkins has a REST API
The Pipeline REST API plugin injects new endpoints into the Jenkins REST API to give you access to pipeline information
Python-Jenkins and jenkinsapi, which you mentioned above, are both wrappers around the Jenkins REST API to help you develop scripts/apps targeting the Jenkins API in Python. Since those modules/libs are most probably based on the core API specification, they most probably don't provide specific methods to target the pipeline endpoints (but you can probably extend that quite easily, or just hit the endpoints directly, as sketched below).
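For instance, a pipeline endpoint can be queried directly with plain requests. A minimal sketch, where the URL, job name, run id, and credentials are all placeholders:
import requests
# The Pipeline REST API plugin exposes endpoints under /wfapi/.
# GET /job/:job-name/:run-id/wfapi/describe returns stage-level status.
resp = requests.get(
    'https://localhost/job/anyjob/10/wfapi/describe',
    auth=('username', 'api_token'),
    verify=False,  # only for self-signed test instances
)
print(resp.json())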
If you want to stay with Jenkinsapi, get_data(), defined in the JenkinsBase class, can be used to query the Pipeline REST API directly. I'm not sure whether this is recommended or not.
The following code works for me:
from jenkinsapi.jenkins import Jenkins
import requests
# Suppress the InsecureRequestWarning caused by ssl_verify=False.
requests.packages.urllib3.disable_warnings()
# Pipeline REST API endpoint: GET /job/:job-name/:run-id/wfapi/describe
url = 'https://localhost/job/anyjob/10/wfapi/describe'
jenkins = Jenkins(
    'https://localhost',
    username='username',
    password='api_token',
    ssl_verify=False,
    lazy=True)
# get_data() (inherited from JenkinsBase) performs an authenticated GET
# and returns the parsed JSON response.
print(jenkins.get_data(url))
Is it possible to upload a Kubeflow Pipeline via an API call using the Kubeflow Pipelines Python SDK?
There is the following spec for making API calls: https://www.kubeflow.org/docs/pipelines/reference/api/kubeflow-pipeline-api-spec/
However, when I try uploading a pipeline using the route "/apis/v1beta1/pipelines/upload" in Postman, I get the following error:
There was a problem with your request. Error code 13
I am sure I need to add some Authorization headers, but before I go that hacky route, I wonder if there are plans to add this functionality to the Python SDK. Please let me know if you have an idea, or a better way to set up the request for uploading pipelines outside the Kubeflow Pipelines UI!
By now it is indeed possible to upload a pipeline with the kfp Python SDK:
import kfp
# Client() connects to the in-cluster Kubeflow Pipelines endpoint by default.
client = kfp.Client()
# 'name' is a placeholder for the display name of the uploaded pipeline.
client.upload_pipeline('path/to/pipeline.yaml', pipeline_name=name)
See the docs for a full API reference.
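If the script runs outside the cluster, kfp.Client also accepts a host argument pointing at the pipelines API endpoint (the URL here is a placeholder):
client = kfp.Client(host='http://localhost:8080')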
I'd like to use some configs for a library that's used both on Dataflow and in a normal environment.
Is there a way for the code to check whether it's running on Dataflow? I couldn't see an environment variable, for example.
Quasi-follow-up to Google Dataflow non-python dependencies - separate setup.py?
One option is to use PipelineOptions, which contains the pipeline runner information. As mentioned in the Beam documentation: "When you run the pipeline on a runner of your choice, a copy of the PipelineOptions will be available to your code. For example, you can read PipelineOptions from a DoFn’s Context."
More about PipelineOptions: https://beam.apache.org/documentation/programming-guide/#configuring-pipeline-options
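A minimal sketch of such a check, assuming the options object is passed down from wherever the pipeline is constructed (the function name is ours):
from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions
def running_on_dataflow(options: PipelineOptions) -> bool:
    # StandardOptions carries the runner name, e.g. 'DataflowRunner'.
    runner = options.view_as(StandardOptions).runner or ''
    return 'DataflowRunner' in runner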
This is not a good answer, but it may be the best we can do at the moment:
import os
# Dataflow worker hostnames contain the string 'harness'.
if 'harness' in os.environ.get('HOSTNAME', ''):
    ...  # running on a Dataflow worker
Is there a way that I can create a new task in Jira by writing code in a Jenkinsfile? I found a plugin to update a task but didn't find anything to create a new one.
You could use curl to create an issue via the JIRA REST API.
Without knowing what you've tried and what errors you've received (if any), it's kind of difficult to be more specific.
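For reference, here is the same REST call sketched in Python with requests instead of curl; the base URL, credentials, and project key are placeholders:
import requests
# POST /rest/api/2/issue creates a new issue; 'Task' is the issue type.
resp = requests.post(
    'https://jira.example.com/rest/api/2/issue',
    auth=('username', 'api_token'),
    json={
        'fields': {
            'project': {'key': 'PROJ'},
            'summary': 'Issue created from a Jenkins pipeline',
            'issuetype': {'name': 'Task'},
        }
    },
)
print(resp.status_code, resp.json())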
There is a plugin that integrates Jira with Jenkins when you are using a Jenkinsfile: the Jira-Pipeline plugin.
Usage and documentation are described in the plugin itself.
So we have Swagger UI and a YAML file manually generated by a developer. The plan is to use Jenkins to validate our API endpoints (request and response schemas) against the Swagger schema. Is there a way to do that?
Please check Swagger Diff. This CLI tool shows breaking changes between two different Swagger JSON files:
http://swagger.io/using-swagger-to-detect-breaking-api-changes/