kubeflow - what do you need to get started?

The Installing Kubeflow page has multiple links to environment-dependent installation documents, but there is no clear overview of what we need to get started.
Are these all we need on a *NIX-based system?
A K8s cluster (either local or remote), with the KUBECONFIG environment variable pointing to the kubeconfig file for that cluster.
kubectl installed locally.
kfctl installed locally.
Python 3 installed locally. The Kubeflow Pipelines SDK does not pin a version, so is any Python 3.x OK?
Docker tooling installed locally, to build Docker images.
The Kubeflow Pipelines SDK installed locally: pip3 install kfp --upgrade.
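If it helps, a minimal shell sketch of that checklist (the kubeconfig path is a placeholder):
export KUBECONFIG=$HOME/.kube/config  # placeholder path to your cluster's kubeconfig
kubectl version --client              # kubectl installed?
kfctl version                         # kfctl installed?
python3 --version                     # any Python 3.x, per the question above
docker --version                      # Docker tooling present?
pip3 install kfp --upgrade            # Kubeflow Pipelines SDK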

Related

Install docker compose as any other plugin

If Docker Compose V2 is a plugin, and Docker plugins can be installed from a registry with the docker plugin install subcommand, why isn't the docker-compose plugin itself published that way, so that docker plugin install docker/compose would work?
Why do the installation instructions point to downloading a release binary from GitHub and manually placing it inside the $DOCKER_CONFIG/cli-plugins folder instead?
They are different kinds of plugins. One is a CLI plugin that interfaces directly with the docker command. The others are specific extensions to the volumes, networking, and other parts of the dockerd engine that runs on the container host. Installing a CLI plugin directly is an interim workflow, designed for developers, while packaging is being finished.
With current supported releases, you can see the docker-compose-plugin package in the Docker repositories, which would be my preferred installation method.
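For example, on a Debian/Ubuntu host with Docker's apt repository already configured, the packaged route looks like this:
sudo apt-get update
sudo apt-get install docker-compose-plugin  # installs the CLI plugin system-wide
docker compose version                      # verify the plugin is picked up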

Is it possible to not use _any_ Docker image in CI mode for GitLab?

I've got a repository that holds a series of documents (MultiMarkdown files, PDFs, GSN arguments, etc.) which need our internal, (currently) proprietary tool to assemble them into an HTML-like document. The internal tool is quite complicated to use and isn't (yet) deployable.
What I tried was compiling the internal tool on the Ubuntu VM that I knew would be used for this job, and then not telling GitLab (we're using self-hosted GitLab) to use any Docker image when it assembled the documents. Alas, when the CI job ran, I saw:
Pulling docker image alpine:latest ...
And then, of course, none of the stuff I installed on the VM itself was available.
Is there a way to have GitLab run the CI job without any Docker image?
If not (or if this alternative is just plain "better"), what is a good resource for reading how to install this complicated internal tool into a Docker image?
NB: The current methodology for "installing" the complicated internal tool, in addition to a lot of installing packages via apt-get, etc., (which I already have examples of how to do in Docker), is to clone the repository, and then run npm install and rake install in the cloned directory.
This is controlled by your GitLab Runner configuration. When the runner uses the docker executor, it will always use a Docker image for the build. If you want to run a GitLab job without Docker, you will need to configure a GitLab runner with the "shell" executor on your VM.
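Registering such a runner might look roughly like this (a sketch; the URL and token are placeholders, and the flags follow the classic registration flow, which varies by GitLab Runner version):
# run on the VM that already has the internal tool compiled on it
gitlab-runner register \
  --non-interactive \
  --url https://gitlab.example.com \
  --registration-token REGISTRATION_TOKEN \
  --executor shell \
  --description "docs-build-vm"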
However, using image: ubuntu:focal or similar is likely enough. You usually don't need to worry that an executor happens to run your job inside a container. This is also beneficial, as it means your build environment is reproducible and its setup is defined in your job.
myjob:
  image: ubuntu:focal
  script:
    - apt update && apt install -y nodejs ruby # or whatever else
    # - npm install
    # - gem install
    # - rake install
    # etc...
Or better yet, if you can produce a Docker image with your core dependencies installed, you can just use image: my-special-image in your GitLab job to use that image as your build environment.
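Building on the NB in the question, a Dockerfile for such an image might look roughly like this (a sketch: the base image, package list, and repository URL are hypothetical placeholders; only the apt-get / npm install / rake install steps come from the question):
# sketch of an image carrying the internal documentation tool
FROM ubuntu:focal
RUN apt-get update && apt-get install -y \
    git nodejs npm ruby rake \
    && rm -rf /var/lib/apt/lists/*
# clone and install the internal tool, per the question's methodology
RUN git clone https://git.example.com/internal/doc-tool.git /opt/doc-tool
WORKDIR /opt/doc-tool
RUN npm install && rake install
You would then point image: at wherever you push that image, e.g. your GitLab project's container registry.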

Install Docker inside Jenkins 2.17

I'm running Jenkins version 2.176.3 on OpenShift Online, and I want to build a pipeline that uses Docker commands to build an image. When I try to build, it fails with an error saying the docker command was not found.
I think that is because I don't have Docker installed in Jenkins. I tried to install it using the Jenkins Plugin Manager, but the Docker plugin requires Jenkins version 2.19 or later.
I also tried accessing the Jenkins container using the oc CLI and installing Docker there, but that did not work.
So what would be the best method for me to install Docker inside Jenkins?
The error means you need to have Docker installed inside your agent image. As a test, try running your pipeline with a Docker image that already contains the Docker CLI.
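In declarative-pipeline terms, that test might look roughly like this (a sketch: docker:20.10 is one public image that ships the docker CLI, and the CLI still needs a reachable daemon, e.g. a mounted socket or a Docker-in-Docker sidecar, to actually build anything):
// a sketch, not a drop-in Jenkinsfile; assumes the controller can schedule Docker-based agents
pipeline {
    agent {
        docker { image 'docker:20.10' }          // public image bundling the docker CLI
    }
    stages {
        stage('build') {
            steps {
                sh 'docker version'                  // confirms the CLI is on PATH
                sh 'docker build -t myapp:latest .'  // myapp is a placeholder tag
            }
        }
    }
}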

Jenkins 2.99 on ICP 2.1

I have installed Jenkins 2.99 on my ICP V2.1 and configured a pipeline job, in a Jenkinsfile, to build Docker images and push them to the local repository. But the docker command is not recognised; I am getting the error:
docker build -t <tag> .
/<>/script.sh: docker: not found
If docker has to be installed separately, how do we install?
Considering ICP (IBM Cloud Private) is an application platform for developing and managing on-premises, containerized applications, docker should already be installed.
Check, outside of Jenkins, that docker is recognized:
which docker
Then, in the Jenkins page displaying the job result, check the Environment Variables section and see whether PATH includes the folder where docker is installed.
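Concretely, that check might look like this on the ICP worker node (output paths are illustrative):
which docker     # e.g. /usr/bin/docker
docker version   # confirms both the client and the daemon respond
echo "$PATH"     # compare against the PATH shown in the Jenkins job's environment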

Dockerfile with multiple images

What I want to build, without Docker, would look like this:
An AWS EC2 Debian OS where I can install:
Python 3
Anaconda
Run Python scripts that will create new files as output
SSH into it so that I can explore the newly created files
I am starting to use Docker, and my first approach was to build everything into a Dockerfile, but I don't know how to add multiple FROM statements so that I can use the official Docker images of Debian, Python 3, and Anaconda.
In addition, with this approach, is it possible to get into the container and explore the files that have been created?
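For what it's worth, a single FROM is usually enough here, since one base image can carry everything. A minimal sketch, assuming the public continuumio/miniconda3 image (Debian-based, with conda preinstalled) and a placeholder script name:
# one Debian-based image carrying Python and conda; no second FROM needed
FROM continuumio/miniconda3
WORKDIR /work
COPY scripts/ /work/scripts/                  # placeholder: your Python scripts
CMD ["python", "scripts/generate_files.py"]   # placeholder script name
To explore the files afterwards, docker exec -it <container> bash gets you a shell inside a running container, and a bind mount such as docker run -v "$PWD/output:/work/output" ... keeps the outputs visible on the host.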
