Can someone please tell me what the recommended solution is for authenticating against Azure Artifacts inside a Dockerfile during the docker build phase, for Maven, NPM or Python?
Most of the solutions to this particular problem of accessing an authenticated Azure Artifacts feed while building from inside a Dockerfile that have been posted on Stack Overflow or elsewhere on the internet (in various articles such as the one below) target the NuGet ecosystem; my question is about Maven, NPM or Python packages.
Example blog post that provides a solution for NuGet & Docker:
https://blog.ehn.nu/2019/05/accessing-azure-artifacts-feed-in-a-docker-build/
Appreciate the help here.
My question is for Maven, NPM or Python Packages.
These package types are from different systems, so we should use different ways to authenticate them.
For Maven:
Check set-up-your-feed. You should configure your pom.xml to define the repository URL, and define a settings.xml file that contains the PAT. You can keep both files in your project and have the Dockerfile copy settings.xml into the ${user.home}/.m2 folder.
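A minimal sketch of that approach, assuming a hypothetical feed id, variable name and paths that you would replace with your own (the PAT is taken from the environment rather than committed):
# Generate a settings.xml next to the Dockerfile; the unquoted heredoc expands the PAT variable.
# The <id> must match the repository id used in pom.xml; the username is typically ignored when a PAT is used.
cat > settings.xml <<EOF
<settings>
  <servers>
    <server>
      <id>my-azure-feed</id>
      <username>azure</username>
      <password>${AZURE_ARTIFACTS_PAT}</password>
    </server>
  </servers>
</settings>
EOF
# In the Dockerfile: COPY settings.xml /root/.m2/settings.xml  (then run mvn as usual)
docker build -t my-app .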
For NPM:
You can follow the other topic here to configure an .npmrc that contains the credentials. Alternatively, I think you can create a plain .npmrc file and then use the Npm Authenticate task to inject the credentials into it. Here's one working sample.
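A rough sketch of how the credential can be kept out of the repository and handed to docker build as a build argument (the variable names are placeholders; the exact credential lines are the ones shown on your feed's "Connect to feed" page or produced by the Npm Authenticate task):
# Pass the PAT to the build instead of committing it in .npmrc
docker build --build-arg NPM_TOKEN="$AZURE_ARTIFACTS_PAT" -t my-app .
# Inside the Dockerfile the token would be appended to the project .npmrc
# (using your feed's credential lines), npm ci would run, and the .npmrc
# would be removed again within the same RUN step.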
For Python:
We can use the Pip Authenticate task to populate the PIP_EXTRA_INDEX_URL environment variable and pass it as a build argument, something like this: arguments: --build-arg INDEX_URL=$(PIP_EXTRA_INDEX_URL)
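For example (image name and argument names are illustrative), the Dockerfile declares the build argument and hands it to pip:
# Forward the authenticated index URL produced by the Pip Authenticate task
docker build --build-arg INDEX_URL="$PIP_EXTRA_INDEX_URL" -t my-app .
# Inside the Dockerfile:
#   ARG INDEX_URL
#   ENV PIP_EXTRA_INDEX_URL=$INDEX_URL
#   RUN pip install -r requirements.txt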
You can check the two answers from this similar issue for more details.
I have made a ROS workspace and, inside it, a package.
I did catkin_make and everything is working well.
I would like to give this package (or should I give the entire workspace?) to another person.
I am thinking of giving him a zip file of the files and folders (it contains launch files, Python scripts, RViz files etc.), so I expect he will unzip it on his machine.
I would like him to be able to run the launch files without problems.
What does he need to do for this? (Of course he will have ROS installed, that is not a problem.)
I am thinking he should perhaps do source devel/setup.bash, but is this enough?
When sharing a workspace with somebody only the source space src has to be shared. It should contain all our packages with their launch files (*.launch), Python (*.py) and C++ nodes (*.cpp, *.hpp), YAML configuration files (*.yaml), RViz configurations (*.rviz), robot descriptions (*.urdf, *.xacro) and describe how each node should be compiled in a CMakeLists.txt. Additionally you are supposed to keep track of all the Debian packages you install inside the package.xml file of each package.
If for some obscure reason there are things that I have to do that can't be accommodated in the standard installation instructions given above, I will actually write a bash script that performs these steps for me and add it either to the package itself or to the workspace. This way I can also automate more complex steps such as installing OpenCV or modifying the .bashrc. Here is a small example of what such a minimal script (I generally name them install_dependencies.sh) might look like:
#!/bin/bash
# Get current workspace
WS_DIR="$(dirname "$(dirname "$(readlink -fm "$0")")")"
# Check if script is run as root '$ sudo ...'
if ["$EUID" -ne 0]
then
echo "Error: This script has to be run as root '$ sudo ./install_dependencies.sh'
exit 1
fi
echo "Installing dependencies..."
# Modify .bashrc
echo "- Modifying '~/.bashrc'..."
echo "source ${WS_DIR}/devel/setup.bash" >> ~/.bashrc
echo ""
echo "Dependencies installed."
If for some reason even that is not possible, I always make sure to document it properly in a Markdown *.md read-me, either in a /doc folder inside your package, in the read-me.md inside the base folder of your repository, or inside the root folder of your workspace.
The receiver then only has to do the following (a condensed command version of these steps is given after the list):
Create a new workspace
Copy or clone the package files to its src folder
Install all the Debian package dependencies listed in the package.xml files with $ rosdep install
(If any: Execute the bash scripts I created by hand $ sudo ./install_dependencies.sh or perform the steps given in the documentation)
Build the workspace with $ catkin_make or $ catkin build from catkin-tools
Source the current environment variables with $ source devel/setup.bash
Make sure that the Python nodes are executable either by $ chmod +x <filename> or right-clicking the corresponding Python nodes (located in src or scripts of your package), selecting Properties/Permissions and enabling Allow executing file as program.
Run the desired Python or C++ nodes ($ rosrun <package_name> <executable_name>) and launch files ($ roslaunch <package_name> <launch_file_name>)
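Condensed into commands, the receiver workflow might look roughly like this (workspace, package and launch-file names are of course placeholders):
# Hypothetical end-to-end sketch for the receiver
mkdir -p ~/catkin_ws/src && cd ~/catkin_ws/src
# ... copy or clone the shared package(s) into this src folder ...
cd ~/catkin_ws
rosdep install --from-paths src --ignore-src -r -y   # install the Debian dependencies from package.xml
catkin_make                                          # or: catkin build
source devel/setup.bash
chmod +x src/my_package/scripts/*.py                 # make the Python nodes executable
roslaunch my_package my_launch_file.launch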
It is up to you to share the code as a compressed file, in the form of a Git repository or in a more advanced way (see below), but I will introduce some best practices in the following paragraphs that will pay off in the long run with larger packages.
Sharing a package or sharing a workspace?
One can either share a single package or an entire workspace. I personally think that most of the time one should share the entire workspace instead of the package alone even if you only cloned the other packages from a public Github repo. This might save the receiver a lot of headache e.g. when checking out the wrong branch.
Version control with Git
Arguably the best way to arrange your packages is by using Git. I'd actually make a repository for every package you create (if a couple of packages are virtually inseparable you can also bundle them into a single Git repo, or better, use metapackages). Then create an additional repository for your workspace and include your own packages and packages from other sources as submodules. This allows your code to be modular and re-usable: you can share only a package or the entire workspace!
As a first step I always add a .gitignore file to each package repository which excludes *.pyc files and another one to the workspace repository that ignores the build, devel and install folders.
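For instance, the two ignore files can be created with something like this (a minimal sketch; extend the patterns as needed):
# In each package repository: ignore compiled Python files
echo "*.pyc" >> .gitignore
# In the workspace repository: ignore the generated folders
printf "build/\ndevel/\ninstall/\n" >> .gitignore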
You can add a particular repository as submodule to your workspace Git repository by opening a console inside the src folder of your workspace repository and typing
$ git submodule add -b <branch_name> <git_url_to_package> <optional_directory_rename>
Note that you can actually track a particular branch of a repository that you include as a submodule. In case you need a submodule at some point follow this guide.
If you share the workspace repository with someone, they will need access to each individual submodule repository, and they will have to not only clone the repository but also pull in the submodules with
$ git clone --recurse-submodules <git_url_to_workspace_repository>
and potentially update them to the latest commit with
$ git submodule update --remote
After these two steps they should have a full version of the repository with submodules and they should be able to progress with the steps listed in the section above.
1.1 Unit-testing and continuous integration
Before sharing a repository you will have to verify that everything is working correctly. This can take a decent amount of time, in particular if the code base is large and you are modifying it frequently. In the ideal case you would have to install it on a brand new machine or inside a virtual box in order to make sure that the set-up works which would take quite some time. This is where unit testing comes into play: For every class and function you program you will write a test. This way you can simply run these tests and make sure everything is working correctly. Generally these unit tests will be performed automatically and the working branches merged continuously. Generally the test routines are written with the libraries Boost::Test (C++), GoogleTest (generally used in ROS with C++), unittest (for Python) and QtTest (for GUIs). For ROS launch files there is additionally rostest. How this can be done in ROS is described here and here.
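Once the test targets are registered in the CMakeLists.txt, running them in a catkin workspace typically boils down to something like this (a rough sketch; with catkin-tools you would use catkin run_tests instead of catkin_make run_tests):
# Build and run all registered test targets, then summarize the results
catkin_make run_tests
catkin_test_results build/test_results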
ROSjects
If you do not even want the person you are sending the code to to go through the hassle of setting it up, you might consider sending them a ROSject. A ROSject is an online virtual ROS environment (by the people behind The Construct, the main source of ROS courses and ROS tutorials on Youtube) that can be created and shared very easily from your existing Git repository, as can be seen here. The simulation runs entirely in the cloud on a virtual machine. This way the potential for failure is very low, but it is not an option if your code is supposed to run on hardware and not only in simulation.
Docker
If your installation procedure is complex you might as well use a container such as Docker.
More information about using Docker in combination with ROS can be found here. The Docker container might introduce a bit of overhead though, and it is probably not an option for code that should run with real-time priority in combination with a real-time patched operating system.
Debian or snap package
Another way of sending somebody a ROS package is by packing it into a Debian or snap package. This process takes a while and is particularly worthwhile if you want to give your code to a large number of users who should be able to use it out of the box. Instructions on how this can be done for Debian packages can be found here and here, while a guide for snap can be found here.
I am setting up a CI/CD pipeline for the SCP Neo environment based on the prebuilt pipeline from Project Piper. I tried to execute the pre-built Project Piper library for Jenkins and got the following error. The error seems to be that neo.sh is not found, but I downloaded the Neo SDK and placed it in the neo-sdk folder. neo.sh is also available inside the /opt/sap/neo-sdk/neo-java-web-sdk-3.39.10/tools folder on Linux.
Please see error in Jenkins
please see .pipeline/config file where that location is referenced
Docker is not used; I set up Jenkins on Ubuntu inside a VMware virtual machine. If Docker is not available, the library is capable of running locally on the Jenkins server.
I am keeping the neo-sdk tool in a local folder that contains neo.sh, which is used to deploy the application to SAP Cloud Platform. I am not writing any scripts of my own, as everything comes from the prebuilt Project Piper scripts.
As already stated in the GH issue, you should extend your PATH env var so that it also looks inside /opt/sap/neo-sdk/neo-java-web-sdk-3.39.10/tools.
You do this by executing export PATH=$PATH:/opt/sap/neo-sdk/neo-java-web-sdk-3.39.10/tools.
Or an even better way would be to symlink the neo.sh into a folder that is already on the PATH.
With echo $PATH you can display the env var and have a look which directories are already exposed.
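For the symlink variant, a one-off command along these lines (using the SDK path from the question) should do:
# Make neo.sh reachable from the PATH for all users
sudo ln -s /opt/sap/neo-sdk/neo-java-web-sdk-3.39.10/tools/neo.sh /usr/local/bin/neo.sh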
The issue is solved, thanks to both of you. I used the EnvInject (Environment Injector) plugin in Jenkins, then went to Manage Jenkins -> Configure -> Set environment variables and set the PATH accordingly.
For more detail see the comment from XP84 in this StackOverflow link
I have a monorepo with some backend (Node.js) and frontend (Angular) services. Currently my deployment process looks like this:
Check if tests pass
Build docker images for my services
Push docker images to container registry
Apply changes to Kubernetes cluster (GKE) with kubectl
I'm aiming to automate all those steps with the help of Bazel and Cloud Build. But I am really struggling to get started with Bazel:
To make it work I'll probably need to add a WORKSPACE file with my external dependencies and multiple BUILD files for my own packages/services? I need help with the actual implementation:
How to build my Dockerfiles with Bazel?
How push those images into a registry (preferably GCR)?
How to apply changes to Google Kubernetes Engine automatically?
How to integrate this toolchain with Google Cloud Build?
More information about the project
I've put together a tiny sample monorepo to showcase my use-case
Structure
├── kubernetes
├── packages
│ ├── enums
│ ├── utils
└── services
├── gateway
General
Gateway service depends on enums and utils
Everything is written in Typescript
Every service/package is a Node module
There is a Dockerfile inside the gateway folder, which I want to be built
The Kubernetes configuration is located in the kubernetes folder.
Note, that I don't want to publish any npm packages!
What we want is a portable Docker container that holds our Angular app along with its server and whatever machine image it requires, and that we can bring up on any cloud provider. We are going to create the entire pipeline to be incremental: the Docker rules are fast. Essentially, they work by adding new Docker layers, so that the changes you make to the app are the only things sent over the wire to the cloud host. In addition, since Docker images are tagged with a SHA, we only re-deploy images that changed. To manage our production deployment we will use Kubernetes, for which Bazel rules also exist. Building a Docker image from a Dockerfile using Bazel is not possible to my knowledge, because it is by design not allowed due to the non-hermetic nature of Dockerfiles (source: Building deterministic Docker images with Bazel).
The changes made to the source code are going to get deployed to the Kubernetes cluster; this is one way to achieve that with Bazel.
We have to put Bazel in watch mode; deploy.replace tells the Kubernetes cluster to update the deployed version of the app.
Command: ibazel run :deploy.replace
If there are any source code changes, make them in the Angular app.
Bazel incrementally re-builds just the parts of the build graph that depend on the changed file. In this case, that includes the ng_module that was changed, the Angular app that includes that module, and the Docker nodejs_image that holds the server. As we have asked it to update the deployment, after the build is complete it pushes the new Docker container to Google Container Registry and the Kubernetes Engine instance starts serving it. Bazel understands the build graph, so it only re-builds what has changed.
Here are a few snippet-level tips which can actually help.
WORKSPACE FILE:
Create a Bazel WORKSPACE file. The WORKSPACE file tells Bazel that this directory is a "workspace", which is like a project root. The things to be done inside the Bazel workspace are listed below.
• The name of the workspace should match the npm package where we publish, so that these imports also make sense when referencing the published package.
• Mention all the rules in the Bazel workspace using http_archive. As we are using Angular and Node, the rules should be mentioned for rxjs, angular, angular_material, io_bazel_rules_sass, angular-version, build_bazel_rules_typescript and build_bazel_rules_nodejs.
• Next we have to load the dependencies using load: sass_repositories, ts_setup_workspace, angular_material_setup_workspace, ng_setup_workspace.
• Load the Docker base images as well, in our case "@io_bazel_rules_docker//nodejs:image.bzl".
• Don't forget to mention the browser and web test repositories:
web_test_repositories()
browser_repositories(
    chromium = True,
    firefox = True,
)
"BUILD.bazel" file.
• Load the Modules which was downloaded ng_module, the project module etc.
• Set the Default visiblity using the "default_visibility"
• if you have any Jasmine tests use the ts_config and mention the depndencies inside it.
• ng_module (Assets,Sources and Depndeencies should be mentioned here )
• If you have Any Lazy Loading scripts mention it as part of the bundle
• Mention the root directories in the web_package.
• Finally Mention the data and the welcome page / default page.
Sample Snippet:
load("#angular//:index.bzl", "ng_module")
ng_module(
name = "src",
srcs = glob(["*.ts"]),
tsconfig = ":tsconfig.json",
deps = ["//src/hello-world"],
)
load("#build_bazel_rules_nodejs//:future.bzl", "rollup_bundle")
rollup_bundle(
name = "bundle",
deps = [":src"]
entry_point = "angular_bazel_example/src/main.js"
)
Build the bundle using the command below:
bazel build :bundle
Pipeline: through Jenkins
Create the pipeline in Jenkins; to run the pipeline there are stages, and each stage does a separate task. In our case we use a stage to publish the image using bazel run.
pipeline {
    agent any
    stages {
        stage('Publish image') {
            steps {
                sh 'bazel run //src/server:push'
            }
        }
    }
}
Note:
bazel run :dev.apply
dev.apply maps to kubectl apply, which will create or replace an existing configuration (for more information see the kubectl documentation). This applies the resolved template, which includes republishing images. This action is intended to be the workhorse of fast-iteration development (rebuilding/republishing/redeploying).
If you want to pull containers using the WORKSPACE file, use the rule below:
container_pull(
    name = "debian_base",
    digest = "sha256:**",
    registry = "gcr.io",
    repository = "google-appengine/debian9",
)
If GKE is used, the gcloud SDK needs to be installed, and as we are using GKE (Google Kubernetes Engine), it can be authenticated using the below method:
gcloud container clusters get-credentials <CLUSTER NAME>
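A slightly fuller sketch (the cluster, zone and project names are placeholders) that also sets up Docker credentials for pushing to GCR:
# Fetch kubectl credentials for the cluster
gcloud container clusters get-credentials my-cluster --zone us-central1-a --project my-project
# Let Docker (and therefore the Bazel push rules) use gcloud credentials for gcr.io
gcloud auth configure-docker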
The deployment object should be declared in the below format:
load("#io_bazel_rules_k8s//k8s:object.bzl", "k8s_object")
k8s_object(
name = "dev",
kind = "deployment",
template = ":deployment.yaml",
images = {
"gcr.io/rules_k8s/server:dev": "//server:image"
},
)
Sources:
https://docs.bazel.build/versions/0.19.1/be/workspace.html
https://github.com/thelgevold/angular-bazel-example
https://medium.com/@Jakeherringbone/deploying-an-angular-app-to-kubernetes-using-bazel-preview-91432b8690b5
https://github.com/bazelbuild/rules_docker
https://github.com/GoogleCloudPlatform/gke-bazel-demo
https://github.com/bazelbuild/rules_k8s#update
https://codefresh.io/howtos/local-k8s-draft-skaffold-garden/
https://github.com/bazelbuild/rules_k8s
A few months later and I've gone relatively far in the whole process.
Posting every detail here would just be too much!
So here is the open-source project which has most of the requirements implemented: https://github.com/flolu/fullstack-bazel
Feel free to contact me with specific questions! :)
Good luck
Flo, have you considered using terraform and a makefile for auto-building the cluster?
In my recent project, I automated the infrastructure end to end with make & terraform. Essentially, that approach builds the entire cluster and builds and deploys the entire project with one single command within 3-5 minutes, depending on how fast GCP is on a given day.
There is a Google sample project showing the idea, although the terraform config is outdated and needs to be replaced with a config adhering to the current 0.13 / 0.14 syntax.
https://github.com/GoogleCloudPlatform/gke-bazel-demo#build--deploy-with-bazel
The makefile that enables the one-command end to end automation:
https://github.com/GoogleCloudPlatform/gke-bazel-demo/blob/master/Makefile
Again, replace or customize the scripts for your project. I actually wrote two more scripts: one for checking/installing requirements on the client, i.e. git, kubectl & gcloud, and another one for checking or configuring & authenticating gcloud in case it's not yet configured and authenticated. From there, the terraform script takes over and builds the entire cluster, and once that's done, the usual auto-deployment kicks in.
I find the idea of layering make over terraform & bazel for end to end automation just brilliant.
I would like to know, how can I install a plugin to Jenkins, using the Jenkins Remote
access API?
I found a way to install using jenkins CLI. But I need to know how to do the same using API.
I tried using jenkins-python library. But I did not find any way to
install plugin there.
Send (HTTP POST) the following XML data (with your plugin-id@version) to the Jenkins plugin manager. Check out my jenkins install plugin script on gist.
This HTTP POST request installs the Jenkins Git plugin 2.0:
curl -X POST -d '<jenkins><install plugin="git@2.0" /></jenkins>' --header 'Content-Type: text/xml' http://localhost:8080/pluginManager/installNecessaryPlugins
Some plugins are hard to update on the file system because others depend on them (credentials is one example). For such plugins it is only possible to update them using the web interface.
The Jenkins frontend has a page under 'Manage Jenkins' -> 'Manage Plugins'. Under the 'Advanced' tab is a form to 'uploadPlugin'. It allows web automation with curl; you might need to add authentication.
curl -i -F file=@pluginfilename.hpi http://jenkinshost/jenkins/pluginManager/uploadPlugin
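If your Jenkins requires authentication, the same call should work with a user name and API token via basic auth (USER and API_TOKEN below are placeholders; depending on your CSRF settings you may additionally need a crumb):
curl -i -F file=@pluginfilename.hpi --user "USER:API_TOKEN" http://jenkinshost/jenkins/pluginManager/uploadPlugin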
In addition to the methods already mentioned (I personally used the "curl uploadPlugin" one provided by @bbaassssiiee), you need to consider that if you use pluginManager Jenkins will try to load your plugin dynamically, but in case you need to restart Jenkins to initialize the plugin properly (this was my case), you should add:
curl -kX POST https://${JENKINS_URL}/safeRestart
In case you copy the plugin directly into the Jenkins plugins directory, the restart is mandatory for the plugin to be loaded.
As suggested by malenkiy_scot, we can create a job and use the Jenkins CLI. Here is the secret way I automate plugin installation. Jenkins plugins are available from the Jenkins mirror here: http://updates.jenkins-ci.org/latest. This link might not list anything, but you can download a plugin if you know its name. For example, if you want to download the skype-notifier plugin, you can download it from http://updates.jenkins-ci.org/latest/skype-notifier.hpi; the generic URL is http://updates.jenkins-ci.org/latest/<plugin-name>.hpi.
After downloading, the plugin should go into the "plugins" directory in the Jenkins home on the server. For a Linux machine, that will most likely be /var/lib/jenkins/plugins. Simple example:
wget http://updates.jenkins-ci.org/latest/skype-notifier.hpi
mv skype-notifier.hpi /var/lib/jenkins/plugins
There are two things to note here:
If the plugin has any dependencies, those will not be installed by default. If you know which other plugins are required, those can be installed the same way; a bit of manual work is required here. But if the same set of plugins is needed repeatedly, the dependencies can be resolved just once and a script can be written to download the plugins and move them to the Jenkins home.
Downloaded plugins cannot be used right away. A reload of Jenkins is required.
After a lot of blood, sweat and tears my suggested solution is:
Download the hpi files (plugin and dependencies) using plugin-installation-manager-tool (requires Java) or install-plugins.sh (requires bash only, but is officially deprecated, though still working as of 09/2021).
Note: Both are also contained in the official Docker image (see also Offline Installations).
Then install all downloaded files via
curl -i -F file=#plugin.hpi http://${JENKINS_URL}/pluginManager/uploadPlugin
Why?
POSTing to /pluginManager/installNecessaryPlugins always installs the latest version (known bug or feature?) and seems to install only the requested plugin without proper dependency handling.
Simple example
Requires install-plugins.sh and its dependency jenkins-support from jenkinsci/docker.
You have to adapt install-plugins.sh line 27 to point to your jenkins-support file, e.g.
. jenkins-support if you have everything in one folder and execute it from there.
pluginFolder=$(mktemp -d)
# Download plugins
JENKINS_UC=https://updates.jenkins.io REF="${pluginFolder}" \
install-plugins.sh \
docker-workflow:1.26 docker-plugin:1.2.2
# add more plugins in here, pass a bash array or load from file
# (see Real-life example below)
# Install all downloaded plugin files via HTTP
for pluginFile in "${pluginFolder}/plugins"/*; do
curl -i -F "file=#${pluginFile}" http://${JENKINS_URL}/pluginManager/uploadPlugin
done
Real-life example
Taken from cloudogu/gitops-playground.
download-plugins.sh - loads all plugins declared in plugins.txt using install-plugins.sh to a directory passed as parameter.
init-jenkins.sh calls download-plugins.sh, then installs the plugins using jenkins-REST-client.sh
I do not think this is possible. However, as a workaround you may consider creating a job that would install plugins via Jenkins CLI; you then can invoke that job via the API with appropriate parameters.