Install Docker Compose like any other Docker plugin

If Docker Compose V2 is a plugin, and Docker plugins can be installed from a registry with the docker plugin install subcommand, why isn't the docker-compose plugin itself published that way, so that docker plugin install docker/compose would work?
Why do the installation instructions instead point to downloading a release binary from GitHub and manually placing it inside the $DOCKER_CONFIG/cli-plugins folder?

They are different kinds of plugins. Compose is a CLI plugin that interfaces directly with the docker command. Engine plugins, by contrast, are specific extensions to the volume, networking, and other subsystems of the dockerd engine that runs on the container host, and those are what docker plugin install manages. Installing a CLI plugin by hand is an interim workflow, designed for developers, until packaging is finished.
With currently supported releases, you can find the docker-compose-plugin package in the Docker repositories, which would be my preferred installation method.
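A minimal sketch of that package-based install on a Debian/Ubuntu host, assuming Docker's apt repository is already configured (the package name comes from the answer above; the verification command is an assumption about your setup):

```shell
# Install the Compose CLI plugin from Docker's apt repository
sudo apt-get update
sudo apt-get install -y docker-compose-plugin

# Verify the plugin is now visible to the docker CLI
docker compose version
```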

Related

Is it possible to not use _any_ Docker image in CI mode for GitLab?

I've got a repository that has a series of documents (multimarkdown files, PDFs, GSN arguments, etc.) that need our internal (currently proprietary) tool to assemble those documents into an HTML-like document. The internal tool is quite complicated to use and isn't (yet) deployable.
What I tried was compiling the internal tool on the Ubuntu VM that I knew would be used for this job, and then telling GitLab (we're using self-hosted GitLab) not to use any docker image when it assembled the documents. Alas, when the CI job ran, I saw:
Pulling docker image alpine:latest ...
And then, of course, none of the stuff I installed on the VM itself was available.
Is there a way to have GitLab run the CI job without any Docker image?
If not (or if this alternative is just plain "better"), what is a good resource for reading how to install this complicated internal tool into a Docker image?
NB: The current methodology for "installing" the complicated internal tool, in addition to a lot of package installation via apt-get, etc. (which I already have examples of how to do in Docker), is to clone the repository and then run npm install and rake install in the cloned directory.
This is controlled by your GitLab Runner configuration. When the runner uses the docker executor, it will always use a docker image for the build. If you want to run a GitLab job without docker, you will need to configure a runner with the "shell" executor on your VM.
However, using image: ubuntu:focal or similar is likely enough. You usually don't need to be concerned that the executor happens to run your job inside a container. This is also beneficial: it means your build environment is reproducible, and its setup is defined in your job.
myjob:
  image: ubuntu:focal
  script:
    - apt update && apt install -y nodejs ruby # or whatever else
    # - npm install
    # - gem install
    # - rake install
    # etc...
Or better yet, if you can produce a docker image with your core dependencies installed you can just use image: my-special-image in your GitLab job to use that image as your build environment.
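If you go that route, a minimal sketch of such an image (the base image, the package list, and the my-special-image name are assumptions chosen to match the job above):

```dockerfile
# Hypothetical build image with the job's core dependencies preinstalled
FROM ubuntu:focal
RUN apt-get update && \
    apt-get install -y --no-install-recommends nodejs npm ruby rake && \
    rm -rf /var/lib/apt/lists/*
```

Build it with docker build -t my-special-image ., push it to a registry your runner can reach, and then reference it in the job with image: my-special-image.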

Is there a way to install all the packages of a docker image in a specific directory?

I'm trying to connect Pycharm to a docker container where gcloud is installed. In order to run my project I need to set the path where gcloud is installed.
Is there a way to install all the packages in a specific directory when creating a docker container?
If not, how can I look for the path where gcloud has been installed?
Edit: found the answer to the second question:
After entering the container's shell, it is possible to find where the Google Cloud SDK is installed.
Still, being able to install all the content of the docker image in a specific directory would be preferable.
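For the second question, a sketch of locating gcloud in a running container from the host (the container name mycontainer is an assumption):

```shell
# Ask the container where gcloud resolves on its PATH
docker exec mycontainer which gcloud

# Or search the container filesystem if it is not on PATH
docker exec mycontainer sh -c 'find / -name gcloud -type f 2>/dev/null'
```

Whatever path this prints is the one to configure in PyCharm's interpreter/path settings.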

Install Docker inside Jenkins 2.17

I'm running Jenkins version 2.176.3 on OpenShift Online, and I want to build a pipeline that uses Docker commands to build an image. When I tried to build, I got an error saying the Docker command was not found.
I think that is because I don't have Docker installed in Jenkins. I tried to install it using the Jenkins Plugin Manager, but the Docker plugin requires Jenkins version 2.19 or later.
I also tried accessing the Jenkins container using the oc CLI and installing Docker there, but it did not work.
So what would be the best method for me to install Docker inside Jenkins?
The error means you need to have docker installed inside your agent/slave image. For test purposes, try running your pipeline with a docker image that already contains the docker tool.
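A minimal sketch of such a pipeline using a docker-capable agent image (the docker:24-cli tag and the socket mount are assumptions, and on OpenShift Online you may not be allowed to mount the host's Docker socket at all, in which case a daemonless builder is needed instead):

```groovy
pipeline {
    agent {
        docker {
            // Agent image that already ships the docker CLI
            image 'docker:24-cli'
            // Reuse the host daemon; requires access to its socket
            args '-v /var/run/docker.sock:/var/run/docker.sock'
        }
    }
    stages {
        stage('Build image') {
            steps {
                sh 'docker build -t myapp:latest .'
            }
        }
    }
}
```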

Jenkins 2.99 on ICP 2.1

I have installed Jenkins 2.99 on my ICP V2.1. I have configured a pipeline job in a Jenkinsfile to build docker images and push them to the local repository, but the docker command is not recognised. I am getting the error:
docker build -t <tag> .
/<>/script.sh: docker: not found
If docker has to be installed separately, how do we install it?
Considering that ICP (IBM Cloud Private) is an application platform for developing and managing on-premises, containerized applications, docker should already be installed.
Check, outside of Jenkins, that docker is recognised:
which docker
Then, in the Jenkins page displaying the job result, check the Environment Variables section and see whether PATH includes the folder where docker is installed.
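You can also check this from inside the pipeline itself with a quick sh step; a sketch of the commands it would run (the fallback message is just illustrative):

```shell
# Run inside the Jenkins job to see what the build environment provides
echo "$PATH"
which docker || echo "docker not on PATH"
```

If the second line prints the fallback message, the agent running the job has no docker binary, regardless of what is installed on the ICP host.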

Docker CLI in Freestyle Build Shell

I'd like to run the postgres docker image instead of the postgres binary provided by CloudBees, due to the lack of uuid and PostGIS support. However, the only two docker plugins I have access to are the CloudBees Docker Custom Build Environment Plugin and the Docker Commons Plugin. I'd like to avoid the extra complexity of the custom build environment plugin, since it would require setting up docker-in-docker (an outer image for the slave and an inner image for postgres).
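If the build agent has a docker CLI and daemon available, you can skip both plugins and start the database directly from a freestyle "Execute shell" step. A sketch (the postgis/postgis image tag, container name, and password are assumptions):

```shell
# Start a disposable PostGIS-enabled postgres for this build
docker run -d --name build-pg \
  -e POSTGRES_PASSWORD=build-secret \
  -p 5432:5432 \
  postgis/postgis:15-3.4

# ... run the build/tests against localhost:5432 ...

# Tear the container down at the end of the job
docker rm -f build-pg
```

Putting the docker rm -f in a post-build step (or trapping it in the script) keeps a failed build from leaking containers onto the agent.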
