How can I dump bitbucket environment variables into a file? - bitbucket

I have a service that will be migrated to GitHub, but before moving forward I need to dump the Bitbucket repo environment variables into a local file. Is that possible?

Yes, you can do it using the Bitbucket API. You can list the variables and fetch each one via cURL, a bash script, or a language such as Python, Java, Node.js, or PHP.
API documentation references:
list: https://developer.atlassian.com/cloud/bitbucket/rest/api-group-pipelines/#api-repositories-workspace-repo-slug-pipelines-config-variables-get
get: https://developer.atlassian.com/cloud/bitbucket/rest/api-group-pipelines/#api-repositories-workspace-repo-slug-pipelines-config-variables-variable-uuid-get
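For example, the list endpoint above can be combined with jq to write the variables into a .env-style file. A minimal sketch, assuming jq is available; the workspace, repo slug, and credential variable names are placeholders you must fill in:

```shell
# Turn the list endpoint's JSON reply into KEY=value lines.
# Secured variables omit their "value" in the API reply (their
# contents are never retrievable), so we mark them instead.
dump_vars() {
  jq -r '.values[] | "\(.key)=\(.value // "<secured>")"'
}

# Live usage (requires credentials with permission to read pipeline config):
# curl -s -u "$BB_USER:$BB_APP_PASSWORD" \
#   "https://api.bitbucket.org/2.0/repositories/$WORKSPACE/$REPO_SLUG/pipelines_config/variables/?pagelen=100" \
#   | dump_vars > pipeline_vars.env
```

Note that the API paginates results, so for more than a page of variables you would also need to follow the `next` links in the reply.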

Related

Using environment variables in Github, NOT in actions

I am working on setting up CI/CD for my organization. I am trying to build an automated system that will deploy our changes to our Shopify theme when we push to a branch.
I have a config.yml file that contains information like password, themeID, and so on. I am trying to find a way to hide these variables in our repo. GitHub Secrets have not seemed to work, as I can't figure out how to pass those secrets from my workflow YAML file to my root-level config.yml file. Any help would be greatly appreciated!
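One common workaround for this kind of setup, sketched here without claiming it is the canonical answer, is to commit a template with placeholders instead of the real config.yml, then render the real file in a workflow step where the secret arrives as an environment variable (e.g. via `${{ secrets.SHOPIFY_PASSWORD }}` in the workflow YAML). The placeholder tokens and variable names below are made up for illustration:

```shell
# Render config.yml from a committed template by substituting
# placeholder tokens with values taken from environment variables.
# __PASSWORD__ / __THEME_ID__ are hypothetical placeholder names.
render_config() {
  sed -e "s|__PASSWORD__|$SHOPIFY_PASSWORD|g" \
      -e "s|__THEME_ID__|$SHOPIFY_THEME_ID|g"
}

# In a workflow step:
# render_config < config.yml.template > config.yml
```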

Are notebooks accessible on the spark as a service file system?

I would like to investigate if it is possible to use the git command line client using a %%sh cell so that I can work directly with project resources such as scripts and notebooks using a git client. E.g.
%%sh
git clone ... myproj
Are the DSX notebooks stored on the Spark as a Service file system? If so, what folder are they stored in?
The notebooks are managed and stored separately; the .ipynb files are not exposed directly because DSX needs the ability to organize these notebooks in a project-based, collaborative environment.
You can certainly use
%%sh
git clone https://github.com/charles2588/bluemixsparknotebooks
Since the .ipynb files are not exposed, you cannot push them from here.
The alternative would be to use the GitHub integration and push files as explained in this post:
http://datascience.ibm.com/blog/github-integration-available-2/
Thanks,
Charles.

OpenShift S2I build strategy from multiple data sources

A web application typically consists of code, config, and data. Code can often be made open source on GitHub, but per-instance config and data may contain secrets and are therefore inappropriate to store in GH. Data can be imported into persistent storage, so disregard it for now.
Assuming the configs are file based and are saved in another private secured SVN repo, in order to deploy the web app to OpenShift and implement CI, I need to merge config files with code prior to running build scripts. In addition, the build strategy should support GH webhooks for automated build.
My questions are, to be more specific:
Does OS BuildConfig support multiple data sources, especially from svn?
If not, how to deploy such web app to OS?
The solution I came up with so far:
Instead of relying on OS for CI, use Jenkins instead.
Merge config files with code using Jenkins.
Instead of using the Git source type in the BuildConfig, use the binary source type.
Let Jenkins run
oc start-build <buildconfig-name> --from-dir=<directory>
where <directory> contains the merged code and config.
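The Jenkins-side merge step can be sketched roughly as follows; `merge_sources`, the directory names, and the build config name `webapp` are all hypothetical, and the SVN/Git checkouts are assumed to have happened already:

```shell
# Overlay the privately-held config checkout onto the code checkout,
# producing one directory suitable for a binary build.
# $1 = code checkout, $2 = config checkout, $3 = output directory
merge_sources() {
  mkdir -p "$3/config"
  cp -R "$1/." "$3/"
  cp -R "$2/." "$3/config/"
}

# Then, from the Jenkins job:
# merge_sources ./code-checkout ./svn-config ./merged
# oc start-build webapp --from-dir=./merged --follow
```

Note that with the binary source type you lose the GH webhook trigger on the BuildConfig itself; the webhook would instead point at Jenkins, which performs the merge and starts the build.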

Gitlab and HTML documentation

My development environment consists of a Git repository, the GitLab repository manager, and Jenkins. During the build process, documentation is generated with Doxygen in HTML format.
Is it possible to store that documentation on the GitLab project's wiki? I know that HTML is not supported in Gollum, which is GitLab's wiki engine. Converting HTML to Markdown is not satisfactory because of internal links in the HTML files that point to other HTML files.
Should I store documentation in a separate wiki instead and only commit a link to GitLab project wiki?
I guess the answer depends on what you use your HTML documentation for and how much you distribute it.
First of all, which version of GitLab are you using?
If it is only for your developers to get access to (aka these are private documents), then on GitLab >= 8.1.2 there is a service called External Wiki that allows you to substitute the project's wiki link with any URL of your choice. Just set up a web server serving your HTML documentation, have your build server upload the newest version after each build, and call it a day.
If your documentation is code that you want to version-control and distribute, then do so.
Whether you go for the first or second option, it is still a good idea to keep the whole documentation in a separate Git repository, because you get compression for free and using git pull is much better than using rsync to synchronise local or remote directories. Then it is just a matter of setting up a shell script or Git hooks to do all of it for you automatically, either at commit or build time.
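A rough sketch of that separate-docs-repo idea as a post-build step; `publish_docs`, the paths, and the assumption that a clone of the docs repository already exists on the build server are all illustrative:

```shell
# Copy freshly generated HTML into a local clone of the docs
# repository, commit, and push.
# $1 = directory with generated HTML, $2 = local clone of the docs repo
publish_docs() {
  cp -R "$1/." "$2/"
  ( cd "$2" &&
    git add -A &&
    { git commit -q -m "Update Doxygen docs" || true; } &&  # no-op when unchanged
    git push -q origin HEAD )
}

# From Jenkins, after Doxygen runs:
# publish_docs ./build/html /var/lib/docs-repo
```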
Hope this helps!

Access build.xml in Jenkins via REST api

Each finished build in Jenkins has a build.xml file (in work/jobs/...BuildName.../builds/...BuildNumber...) with a lot of info about the build. Is it possible to access that file using the REST API? I tried a lot of variations, but I could not find it.
Look in Jenkins itself for the documentation.
If you access the URL
http://SERVER/job/JOB/api/
you will see how to use the REST API, which can access all elements of your Jenkins instance (including parameters and logs from the build).
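For instance, the JSON flavour of that API exposes most of what build.xml contains for a given build. A hedged sketch, where SERVER and JOB are placeholders and jq is assumed to be available:

```shell
# Extract the overall result ("SUCCESS", "FAILURE", ...) from the
# JSON the per-build API endpoint returns.
build_result() {
  jq -r '.result'
}

# Live usage (add -u user:apitoken if your Jenkins is secured):
# curl -s "http://SERVER/job/JOB/lastBuild/api/json" | build_result
# The tree parameter trims the reply to just the fields you need, e.g.
# curl -s "http://SERVER/job/JOB/42/api/json?tree=result,duration"
```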
I hope this helps.
