I have a Python app that needs access to a private repository, which is referenced in the Dockerfile like this:
RUN --mount=type=ssh pip install -r requirements.txt
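For context, --mount=type=ssh requires BuildKit, and such a Dockerfile usually starts with a syntax directive; a minimal sketch (base image and layout are illustrative, not taken from the question):

# syntax=docker/dockerfile:1
FROM python:3.9
WORKDIR /app
COPY requirements.txt .
# The host's SSH agent socket is mounted for this single RUN step only,
# so pip can fetch the private repository without baking keys into the image.
RUN --mount=type=ssh pip install -r requirements.txt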
I have followed the instructions from the official Docker docs, and things work fine when I run
docker build --ssh default=C:\Users\Ravi.Kumar\.ssh\id_rsa -t somename:latest .
from the command line on the host machine.
Now I am trying to get this to work using the VS Code Remote Container extension. I am getting this in the logs when opening the project in a container using the Remote Container extension:
Container server: Remote to local stream terminated with error: {
  message: 'connect ENOENT \\\\.\\pipe\\openssh-ssh-agent',
  name: 'Error',
  stack: 'Error: connect ENOENT \\\\.\\pipe\\openssh-ssh-agent\n' +
    '\tat PipeConnectWrap.afterConnect [as oncomplete] (net.js:1146:16)'
}
Also, when the remote container starts, I can see this docker build command being used:
Start: Run: docker build -f d:\Code\somename\Dockerfile -t vsc-somename-8afa92e4f821805c825a5facd311c4f9 d:\Code\somename
devcontainer.json file:
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.217.4/containers/docker-existing-dockerfile
{
  "name": "Existing Dockerfile",
  "context": "..",
  "dockerFile": "../Dockerfile",
  "settings": {},
  "build": {},
  "extensions": []
}
Question: How do I tell the Remote Container extension to use the --ssh arg in the docker build command?
I think this reference page can help you find the right syntax to customize docker commands using the devcontainer.json.
Unfortunately, it seems you can't specify arguments to be passed to your docker build command directly.
You can only pass in build-args:
"build": { "args": { "MYARG": "MYVALUE"} }
A potential workaround for your problem is to build the image using the command line you mentioned, and then run the Attach to Running Container... action in VS Code to work inside it from your VS Code instance.
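In practice, that workaround could look like this (the container name and the sleep command are illustrative; any long-running command keeps the container alive so you can attach):

docker build --ssh default=C:\Users\Ravi.Kumar\.ssh\id_rsa -t somename:latest .
docker run -d --name somename-dev somename:latest sleep infinity

Then in VS Code: F1 > Remote-Containers: Attach to Running Container... and pick somename-dev.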
Installed Versions

Application       Version
Docker            19.03.6, build 369ce74a3c
Docker Compose    v2.13.0
OS                Ubuntu 18.04.2 LTS
Docker definitions
docker-compose.yml
version: "2.3"
services:
builder:
build:
context: ./
dockerfile: Dockerfile
args:
- NODE_VERSION=${NODE_VERSION:-12.22.7}
image: redacted/node:${NODE_VERSION}
volumes:
- ./:/code
environment:
- BUILDKIT_PROGRESS=plain
.env
CLIENT=${CLIENT_PREFIX:-xx}
PUBLIC_URL=/${CLIENT}-dashboard
REACT_APP_NODEJS_SERVER=${CLIENT}-server
NODE_VERSION=12.22.7
Dockerfile
ARG NODE_VERSION="12.22.7"
FROM node:${NODE_VERSION}
VOLUME ["/code"]
WORKDIR /code
CMD "/code/build_ui.sh"
Issue description
Our project requires multiple versions of Node & npm to be installed. To avoid compatibility issues, we are trying to use Docker to pin the versions we need.
We use the below command to run the build for our application:
docker-compose run --rm builder
This works on some of our servers, but on some servers, I get either of the below errors:
failed to solve: failed to solve with frontend dockerfile.v0: failed to build LLB: failed to load cache key: rpc error: code = Unknown desc = error getting credentials - err: exit status 1, out: Cannot autolaunch D-Bus without X11 $DISPLAY
Fixed this by following the guide here. However, given that I was trying to pull node, which is a public repo, I don't understand why Docker was attempting a login. And why didn't this happen on all servers?
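For what it's worth, this error usually comes from Docker's credential helpers rather than from the registry: if ~/.docker/config.json contains a credsStore entry such as secretservice, BuildKit asks that helper for credentials even for public pulls, and the secretservice helper needs D-Bus/X11. That would also explain why only some servers are affected. A hedged sketch of the check and fix (assuming the per-user config file and jq being installed):

cat ~/.docker/config.json
# If it shows "credsStore": "secretservice", either install that helper's
# dependencies or drop the entry so Docker falls back to plain file storage:
jq 'del(.credsStore)' ~/.docker/config.json > /tmp/config.json \
  && mv /tmp/config.json ~/.docker/config.json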
docker run: Error response from daemon: No command specified
Fixed this temporarily by building the image manually with docker build. But I really want docker-compose to build the image itself when it doesn't exist.
I was expecting docker-compose to build the image on first run without issues and then run build_ui.sh on container execution.
When I get the above errors, if I manually build the Docker image with the command below (instead of waiting for docker-compose to build it) and then use docker-compose run, it works.
docker build -t redacted/node:12.22.7 .
I am trying to figure out why docker-compose is not building the image correctly when the image doesn't exist.
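One way to narrow this down (not a root-cause fix, but it mirrors the manual workaround above through compose itself) is to build explicitly before running:

docker-compose build builder
docker-compose run --rm builder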
I've created a super simple Docker image. When I use that image in GitLab through a .gitlab-ci.yml file, the GitLab "script:" part never gets executed. It's always:
Executing "step_script" stage of the job script
Cleaning up project directory and file based variables
If I add a "report:" entry to my yml, I get for the last line an "Uploading artifacts for failed job".
It seems as if the bash inside the Docker image is somehow broken, but I don't see how, since I can use docker run MyImage <command> to succesfully run bash commands.
Also, Gitlab lets the pipeline run indefinetly after the last line, never ending it. I never experienced this with other Docker images.
Do I have to modify some rights, or something? I can run e.g. the official gradle Docker image, but not mine, anyone has an idea why?
My simple .gitlab-ci.yml:
image:
  name: <... My Image ...>

stages:
  - build

build-stage:
  stage: build
  script:
    - echo "Testing echo"
My simple Dockerfile:
FROM ubuntu:20.10
CMD ["bash"]
The problem was that my host system is Apple Silicon, while the target GitLab server runs on AMD64, so I was creating linux/arm64 images, not linux/amd64. Setting the platform explicitly, like
docker build --platform linux/amd64 -t MyImageName .
fixed it.
A big problem is GitLab, which just fails without any notification in the log output, such as "wrong architecture".
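To catch this earlier, you can inspect what you actually built before pushing, or build for both platforms in one go (buildx with --push assumes a configured buildx builder and a registry to push to):

# check the platform of the local image
docker image inspect MyImageName --format '{{.Os}}/{{.Architecture}}'
# or produce a multi-platform image that runs on Apple Silicon and AMD64 alike
docker buildx build --platform linux/amd64,linux/arm64 -t MyImageName --push .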
On a Linux system I am running a simple test job from the command line using the following command:
gitlab-runner exec docker --builds-dir /home/project/buildsdir test_job
with the following job definition in .gitlab-ci.yml:
test_job:
  image: python:3.8-buster
  script:
    - date > time.dat
However, the builds folder is empty after the job has run. I can only imagine that --builds-dir refers to a location inside the Docker container.
Also, after running the job successfully, I run
docker image ls
and I do not see a recent image.
So how can I "share"/"mount" the actual build folder of the Docker GitLab job to the host system, so I can access all the output files?
I looked at the documentation and found nothing; the same for
gitlab-runner exec docker --help
I also tried to use artifacts:
test_job:
  image: python:3.8-buster
  script:
    - pwd
    - date > time.dat
  artifacts:
    paths:
      - time.dat
but that also did not help: I was not able to find the file time.dat anywhere after the job completed.
I also tried to use docker-volumes:
gitlab-runner exec docker --docker-volumes /home/project/buildsdir/:/builds/project-0 test_job
gitlab-runner exec docker --docker-volumes /builds/project-0:/home/project/buildsdir/ test_job
but neither worked (job failed in both cases).
You have to configure your config.toml file, located at /etc/gitlab-runner/.
Here are the docs: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-section
First add a builds_dir, then reference it in the volumes of the [runners.docker] section to bind it to a directory on your host machine, like this:
builds_dir = "(your build dir)"

[runners.docker]
volumes = ["/tmp/build-dir:(your build dir):rw"]
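A minimal sketch of how those pieces fit together in /etc/gitlab-runner/config.toml, assuming a registered docker executor (URL, token, and paths are illustrative placeholders):

[[runners]]
  name = "docker-runner"
  url = "https://gitlab.example.com/"   # illustrative
  token = "REDACTED"
  executor = "docker"
  builds_dir = "/builds"                # build dir inside the job container
  [runners.docker]
    image = "python:3.8-buster"
    # bind the container-side builds dir to a directory on the host
    volumes = ["/home/project/buildsdir:/builds:rw"]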
We want to use Paketo.io / Cloud Native Buildpacks (CNB) with GitLab CI in the most simple way possible. Our GitLab setup uses an AWS EKS cluster with unprivileged GitLab CI runners leveraging the Kubernetes executor. We also don't want to introduce security risks by using Docker in our builds, so we neither have our host's /var/run/docker.sock exposed nor want to use docker:dind.
We found some guides on how to use Paketo with GitLab CI, like this one: https://tanzu.vmware.com/developer/guides/gitlab-ci-cd-cnb/. But as described beneath the headline "Use Cloud Native Buildpacks with GitLab in GitLab Build Job WITHOUT Using the GitLab Build Template", the approach relies on Docker and the pack CLI. We tried to replicate this in our .gitlab-ci.yml, which looks like this:
image: docker:20.10.9

stages:
  - build

before_script:
  - |
    echo "install pack CLI (see https://buildpacks.io/docs/tools/pack/)"
    apk add --no-cache curl
    (curl -sSL "https://github.com/buildpacks/pack/releases/download/v0.21.1/pack-v0.21.1-linux.tgz" | tar -C /usr/local/bin/ --no-same-owner -xzv pack)

build-image:
  stage: build
  script:
    - pack --version
    - >
      pack build $REGISTRY_GROUP_PROJECT/$CI_PROJECT_NAME:latest
      --builder paketobuildpacks/builder:base
      --path .
But as outlined, our setup does not support Docker, and we end up with the following error in our logs:
...
$ echo "install pack CLI (see https://buildpacks.io/docs/tools/pack/)" # collapsed multi-line command
install pack CLI (see https://buildpacks.io/docs/tools/pack/)
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/community/x86_64/APKINDEX.tar.gz
(1/4) Installing brotli-libs (1.0.9-r5)
(2/4) Installing nghttp2-libs (1.43.0-r0)
(3/4) Installing libcurl (7.79.1-r0)
(4/4) Installing curl (7.79.1-r0)
Executing busybox-1.33.1-r3.trigger
OK: 12 MiB in 26 packages
pack
$ pack --version
0.21.1+git-e09e397.build-2823
$ pack build $REGISTRY_GROUP_PROJECT/$CI_PROJECT_NAME:latest --builder paketobuildpacks/builder:base --path .
ERROR: failed to build: failed to fetch builder image 'index.docker.io/paketobuildpacks/builder:base': Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Cleaning up project directory and file based variables 00:01
ERROR: Job failed: command terminated with exit code 1
Any idea how to use Paketo Buildpacks with GitLab CI without having Docker present inside our GitLab Kubernetes runners (which seems to be a best practice anyway)? We also don't want our setup to become too complex - e.g. by adding kpack.
TL;DR:
Use the Buildpacks lifecycle directly inside your .gitlab-ci.yml (here's a fully working example):
image: paketobuildpacks/builder

stages:
  - build

# We somehow need to access GitLab Container Registry with the Paketo lifecycle
# So we simply create ~/.docker/config.json as stated in https://stackoverflow.com/a/41710291/4964553
before_script:
  - mkdir ~/.docker
  - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_JOB_TOKEN\"}}}" >> ~/.docker/config.json

build-image:
  stage: build
  script:
    - /cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
The details: "using the lifecycle directly"
There are ongoing discussions about this topic. Especially have a look into https://github.com/buildpacks/pack/issues/564 and https://github.com/buildpacks/pack/issues/413#issuecomment-565165832. As stated there:
If you're looking to build images in CI (not locally), I'd encourage
you to use the lifecycle directly for that, so that you don't need
Docker. Here's an example:
The link to the example is broken, but it refers to the Tekton implementation of how to use buildpacks in a Kubernetes environment. Here we get a first clue about what Stephen Levine referred to as "to use the lifecycle directly". The crucial point inside it is the usage of command: ["/cnb/lifecycle/creator"]. So this is the lifecycle everyone is talking about! And there's good documentation about this command to be found in this CNB RFC.
Choosing a good image: paketobuildpacks/builder:base
So how do we develop a working .gitlab-ci.yml? Let's start simple. Digging into the Tekton implementation, you'll see that the lifecycle command is executed inside an environment defined in BUILDER_IMAGE, which itself is documented as "The image on which builds will run (must include lifecycle and compatible buildpacks)". That sounds familiar! Can't we simply pick the builder image paketobuildpacks/builder:base from our pack CLI command? Let's try this locally on our workstation before committing too much noise into our GitLab. Choose a project you want to build (I created an example Spring Boot app at gitlab.com/jonashackt/microservice-api-spring-boot you can clone, if you'd like) and run:
docker run --rm -it -v "$PWD":/usr/src/app -w /usr/src/app paketobuildpacks/builder bash
Now, inside the container powered by the paketobuildpacks/builder image, try to run the Paketo lifecycle directly with:
/cnb/lifecycle/creator -app=. microservice-api-spring-boot:latest
I only used the -app parameter of the many possible parameters for the creator command, since most of them have quite good defaults. But as the default app directory path is /workspace - not the current directory - I had to configure it. Also, we need to define an <image-name> at the end, which will simply be used as the resulting container image name.
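For illustration, a slightly more explicit local invocation; apart from -app and the trailing image name, the flags shown are optional and their values here are only examples (see the creator documentation in the CNB RFC for the full list):

/cnb/lifecycle/creator \
  -app=. \
  -cache-dir=/tmp/cnb-cache \
  -process-type=web \
  microservice-api-spring-boot:latest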
The first .gitlab-ci.yml
Both commands worked on my local workstation, so let's finally create a .gitlab-ci.yml using this approach (here's a fully working example .gitlab-ci.yml):
image: paketobuildpacks/builder

stages:
  - build

build-image:
  stage: build
  script:
    - /cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
docker login without docker
As we don't have Docker available inside our Kubernetes runners, we can't log in to the GitLab Container Registry as described in the docs. So the following error occurred when using this first approach:
===> ANALYZING
ERROR: failed to get previous image: connect to repo store "gitlab.yourcompanyhere.cloud:4567/yourgroup/microservice-api-spring-boot:latest": GET https://gitlab.yourcompanyhere.cloud/jwt/auth?scope=repository%3Ayourgroup%2Fmicroservice-api-spring-boot%3Apull&service=container_registry: DENIED: access forbidden
Cleaning up project directory and file based variables 00:01
ERROR: Job failed: command terminated with exit code 1
Using the approach described in this SO answer fixed the problem. We need to create a ~/.docker/config.json containing the GitLab Container Registry login information - the Paketo build will then pick it up, as stated in the docs:
If CNB_REGISTRY_AUTH is unset and a docker config.json file is
present, the lifecycle SHOULD use the contents of this file to
authenticate with any matching registry.
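Per that same spec, setting CNB_REGISTRY_AUTH would be the alternative to a config.json file: a JSON map from registry host to an Authorization header value. A hedged sketch (the Basic base64(user:token) form is an assumption about the registry's auth scheme):

export CNB_REGISTRY_AUTH="{\"$CI_REGISTRY\":\"Basic $(echo -n "$CI_REGISTRY_USER:$CI_JOB_TOKEN" | base64 | tr -d '\n')\"}"

We'll stick with the config.json approach here, since the lifecycle picks it up without any extra variables.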
Inside our .gitlab-ci.yml this could look like:
# We somehow need to access GitLab Container Registry with the Paketo lifecycle
# So we simply create ~/.docker/config.json as stated in https://stackoverflow.com/a/41710291/4964553
before_script:
  - mkdir ~/.docker
  - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_JOB_TOKEN\"}}}" >> ~/.docker/config.json
Our final .gitlab-ci.yml
As we're using image: paketobuildpacks/builder at the top of our .gitlab-ci.yml, we can now leverage the lifecycle directly, which is what we wanted to do in the first place. Just remember to use the correct GitLab CI variables to describe your <image-name>, like this:
/cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
Otherwise the Buildpacks analyzer step will break, and the image won't get pushed to the GitLab Container Registry in the end. So finally our .gitlab-ci.yml looks like this (here's the fully working example):
image: paketobuildpacks/builder

stages:
  - build

# We somehow need to access GitLab Container Registry with the Paketo lifecycle
# So we simply create ~/.docker/config.json as stated in https://stackoverflow.com/a/41710291/4964553
before_script:
  - mkdir ~/.docker
  - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_JOB_TOKEN\"}}}" >> ~/.docker/config.json

build-image:
  stage: build
  script:
    - /cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
Our builds should now run successfully using Paketo/Buildpacks, without the pack CLI or Docker:
See the full log of the example project here.
I'm just getting started with docker and continuous integration with Gitlab. I've added the following gitlab-ci.yml file to the root of my repository:
# Official docker image
image: docker:latest

services:
  - docker:dind

build-dev:
  stage: build
  script:
    - docker build -t obikerui/project -f app/Dockerfile.dev ./app

test:
  stage: test
  script:
    - docker run obikerui/project npm run test -- --coverage
The build-dev stage runs and passes but the test stage fails with the following error message:
$ docker run obikerui/project npm run test -- --coverage
Unable to find image 'obikerui/project:latest' locally
docker: Error response from daemon: pull access denied for obikerui/project, repository does not exist or may require 'docker login'.
See 'docker run --help'.
ERROR: Job failed: exit code 125
Can anyone explain what's going wrong and suggest a fix? The repository is private, so do I need to provide some extra configuration to accommodate this?
Each job runs in a different container. You build and tag your image correctly, but it only exists inside that job's container.
For the test job a new container starts, and that one does not have the image built by the previous job.
You should push your image to a registry (after tagging it accordingly), and the test job should then use the image from that registry.
You can use a public registry like the one offered by Docker, or you can run a local container based on the registry:2 image provided by Docker. In that case you have to make sure that the domain name pointing to the registry is reachable on your network (it can be an nginx reverse proxy).
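A sketch of how the fixed pipeline could look using the GitLab Container Registry ($CI_REGISTRY, $CI_REGISTRY_USER, $CI_JOB_TOKEN and $CI_REGISTRY_IMAGE are predefined GitLab CI variables; adjust the Dockerfile path to your project):

# Official docker image
image: docker:latest

services:
  - docker:dind

before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY

build-dev:
  stage: build
  script:
    - docker build -t $CI_REGISTRY_IMAGE:latest -f app/Dockerfile.dev ./app
    - docker push $CI_REGISTRY_IMAGE:latest

test:
  stage: test
  script:
    - docker pull $CI_REGISTRY_IMAGE:latest
    - docker run $CI_REGISTRY_IMAGE:latest npm run test -- --coverage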