Mount repo into Docker image when running a YAML pipeline in Azure DevOps

I am running a Docker image in an Azure DevOps YAML pipeline using a container step. However, I have problems mounting the contents of the repo so that they are accessible from inside the Docker image.
The Azure DevOps pipeline.yml file is as follows:
container:
  image: 'image-name'
  endpoint: 'foo'
  options: '-v $(Build.SourcesDirectory):/testing'

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script inside docker image'
This fails with the error message:
Error response from daemon: create $(Build.SourcesDirectory): "$(Build.SourcesDirectory)" includes
invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended
to pass a host directory, use absolute path
I also tried replacing $(..) with $[..] (see here), but this results in the same error. With ${{..}} the pipeline will not even start (error: "A template expression is not allowed in this context" in the UI).
If I remove options, the script runs, but the repo is not mounted.
For non-YAML pipelines, the question was addressed here.
Any ideas how to accomplish this? Or do I need to create a new Docker image to which the repo files have been added?

Any ideas how to accomplish this? Or do I need to create a new Docker image to which the repo files have been added?
When you specify the container directly in the YAML schema, Azure DevOps automatically runs an extra Initialize containers task before the checkout-source-repo task and your real tasks.
container:
  image: 'image-name'

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script inside docker image'
During the Initialize containers task, predefined variables like Agent.BuildDirectory, Build.Repository.LocalPath and Build.SourcesDirectory are not yet expanded (they are undefined at that point).
So you can't use Build.SourcesDirectory this way, because the value of this variable is only expanded after the Initialize containers task.
1. About why the link you shared above works: it's in a docker task/step, so it can resolve the $(Build.SourcesDirectory) variable. (The real build tasks run after the build variables are defined.)
2. If you're using a specific Microsoft-hosted agent, you can try hard-coding the path. You can check this similar issue.
Usually, for a Windows-hosted agent: $(Build.SourcesDirectory) => D:\a\1\s
For an Ubuntu-hosted agent: $(Build.SourcesDirectory) => /home/vsts/work/1/s.
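As a minimal sketch of the hard-coded variant, assuming an Ubuntu-hosted agent (the host path /home/vsts/work/1/s is an assumption and may change between agent image versions):

container:
  image: 'image-name'
  endpoint: 'foo'
  # Hard-coded sources path instead of $(Build.SourcesDirectory):
  options: '-v /home/vsts/work/1/s:/testing'

steps:
- script: ls /testing
  displayName: 'List the mounted repo inside the container'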

Related

GitLabCI Kaniko on shared runner "error checking push permissions -- make sure you entered the correct tag name"

This similar question is not applicable because I am not using Kubernetes or my own registered runner.
I am attempting to build a Ruby-based image in my GitLabCI pipeline in order to have my gems pre-installed for use by subsequent pipeline stages. In order to build this image, I am attempting to use Kaniko in a job that runs in the .pre stage.
build_custom_dockerfile:
  stage: .pre
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  variables:
    IMAGE_TAG: ${CI_COMMIT_REF_SLUG}-${CI_COMMIT_SHORT_SHA}
  script:
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"username\":\"${CI_REGISTRY_USER}\",\"password\":\"${CI_REGISTRY_PASSWORD}\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context ${CI_PROJECT_DIR} --dockerfile ${CI_PROJECT_DIR}/dockerfiles/custom/Dockerfile --destination ${CI_REGISTRY_IMAGE}:${IMAGE_TAG}
This is of course based on the official GitLabCI Kaniko documentation.
However, when I run my pipeline, this job returns an error with the following message:
error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: getting tag for destination: registries must be valid RFC 3986 URI authorities: registry.gitlab.com
The Dockerfile path is correct and through testing with invalid Dockerfile paths to the --dockerfile argument, it is clear to me this is not the source of the issue.
As far as I can tell, I am using the correct pipeline environment variables for authentication and following the documentation for using Kaniko verbatim. I am running my pipeline jobs with GitLab's shared runners.
According to this issue comment from May, others were experiencing a similar issue which was then resolved when reverting to the debug-v0.16.0 Kaniko image. Likewise, I changed the Image name line to name: gcr.io/kaniko-project/executor:debug-v0.16.0 but this resulted in the same error message.
Finally, I tried creating a generic user to access the registry, using a deployment key as indicated here. Via the GitLabCI environment variables project settings interface, I added two variables corresponding to the username and key, and substituted these variables in my pipeline script. This resulted in the same error message.
I tried several variations on this approach, including renaming these custom variables to "CI_REGISTRY_USER" and "CI_REGISTRY_PASSWORD" (the predefined variables). I also made sure neither of these variables was marked as "protected". None of this solved the problem.
I have also tried running the tutorial script verbatim (without custom image tag), and this too results in the same error message.
Has anyone had any recent success in using Kaniko to build Docker images in their GitLabCI pipelines? It appears others are experiencing similar problems but as far as I can tell, no solutions have been put forward and I am not certain whether the issue is on my end. Please let me know if any additional information would be useful to diagnose potential problem sources. Thanks all!
I have run into this issue many times before, forgetting that the variable was set to protected and thus is only exported to protected branches.
Hey, I got it working, but it was quite a hassle to find out.
The credentials I had to use were my Git username and password, not the registry user/password!
Here is what my gitlab-ci.yml looks like (of course you would want to replace everything with variables, but I was too lazy to do it until now):
build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  tags:
    - k8s
  script:
    - echo "{\"auths\":{\"registry.mydomain.de/myusername/mytag\":{\"username\":\"myGitusername\",\"password\":\"myGitpassword\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination registry.mydomain.de/myusername/mytag:$CI_COMMIT_SHORT_SHA
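For reference, a variable-based sketch of the same job. GIT_USERNAME and GIT_PASSWORD are hypothetical custom CI/CD variables holding the Git credentials; CI_REGISTRY, CI_REGISTRY_IMAGE, CI_PROJECT_DIR and CI_COMMIT_SHORT_SHA are predefined by GitLab:

build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    # GIT_USERNAME / GIT_PASSWORD are assumed custom CI/CD variables
    # (set them in the project settings and leave them unprotected
    # unless the branch is protected too).
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"username\":\"${GIT_USERNAME}\",\"password\":\"${GIT_PASSWORD}\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context "${CI_PROJECT_DIR}" --dockerfile "${CI_PROJECT_DIR}/Dockerfile" --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"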

Use repository name in travis CI

I am facing an issue on Travis CI. I seem not to be able to use the repository name as an environment variable.
I have a Travis build that is setup to build, tag and push a Docker image.
In the after_success section of the .travis.yml file, the following command is run:
docker build -t ${PROJECT}:${TRAVIS_BRANCH} .
(The environment variable ${PROJECT} is the actual name of the repository and is set in the repository settings.)
The problem is that Docker is using "[secure]" as the image name instead of the repository name. I end up with something like
Successfully tagged [secure]:staging
After that when I am tagging the image the following error is returned
Error parsing reference: "/[secure]:staging" is not a valid repository/tag: invalid reference format
I have tried to update ${PROJECT} to a random string and it worked fine.
Is there a way to use the repository name as an environment variable?
It seems as if your $PROJECT contains a leading slash, which might be the problem.
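If all you need is the repository name, a sketch that sidesteps the masked settings variable entirely, using Travis' built-in TRAVIS_REPO_SLUG ("owner/repo"):

after_success:
  # Derive the image name from the unmasked built-in variable instead of
  # the hidden ${PROJECT} settings variable.
  - PROJECT="${TRAVIS_REPO_SLUG##*/}"   # keep only the part after "owner/"
  - docker build -t "${PROJECT}:${TRAVIS_BRANCH}" .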

Creating Docker image and running as service in Jenkins

I have a JSP website. I am building a DevOps pipeline. I am looking for help to integrate Jenkins with Docker.
I already have a Dockerfile that handles deploying the war file to the Tomcat server.
(Command1)
Through the command line, I can run the Dockerfile and create an image.
I can run the created image as a service and am able to browse the website.
(Command2)
I want to do these two steps in Jenkins. I need your help to integrate these two commands in Jenkins, so that I don't need to run these two commands manually one after the other.
I think that you can use the "Docker Pipeline Plugin" for that.
For the first command, you can have a stage that runs:
myImage = docker.build("my-image:my-tag")
If you need you can have another stage where you can run some tests inside the image with:
myImage.inside {
  sh './run-test.sh'
}
Finally, you can push the image to the repository to your repository with:
docker.withRegistry('https://your-registry.com', 'credentials_id') { // use the second parameter if your repository requires authentication
  myImage.push('new_tag') // you can push it with a new tag
}
Please note that if you want to use the docker.* methods in a declarative pipeline, you must do it inside a script step or in a function.
(More info in the plugin's user guide)
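A minimal sketch of that, assuming a declarative Jenkinsfile and the image name my-image:my-tag from above:

pipeline {
    agent any
    stages {
        stage('Build image') {
            steps {
                // docker.* methods are scripted-pipeline constructs provided
                // by the Docker Pipeline plugin, so wrap them in a script step.
                script {
                    def myImage = docker.build('my-image:my-tag')
                }
            }
        }
    }
}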
For the second command, you only have to update the running image on the server. For doing that you have a lot of options (docker service update if you're using Docker Swarm, for example), and I think that part is outside the scope of this post.
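For completeness, a hedged one-line example of the Swarm option (the service name my-service and the registry path are placeholders):

# Roll an existing Swarm service over to the newly pushed image.
docker service update --image your-registry.com/my-image:new_tag my-service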

jenkins pipeline DOCKER_HOST

I need to run a Docker container inside my pipeline.
My problem is that there is no docker.sock available inside the Jenkins container, and actually no chance to get it.
But I found some jobs using Docker with the option "Inject environment variables to the build process" -> "Properties Content", with the following configured:
DOCKER_HOST=tcp://<ip>:<port>
DOCKER_CERT_PATH=/var/jenkins_home/certs
In my understanding, this is equivalent to the docker.sock and usable as a plugin, isn't it?
But how can I use it inside a (multi-)pipeline project?
I've tried using this block inside my node:
environment {
  DOCKER_HOST = 'tcp://<ip>:<port>'
  DOCKER_CERT_PATH = '/var/jenkins_home/certs'
}
But I got the same issue: "docker: not found"
I might have a logical fallacy here. I hope someone can help.
Otherwise, is it possible to create a Jenkins slave that includes a docker.sock?
But I got the same issue: "docker: not found"
This indicates that your Jenkins slave, the one running the pipeline script, does not have the docker command line tools. This depends on your distribution, but in my case I fixed it by changing my build-slave/pipeline-runner creation steps to include:
yum install -y docker-client
Note that you'll still need that for the Cloudbees docker plugin (the thing which provides stuff like docker.build() and docker.image()) because it translates those nice pipeline directives down into shell commands.
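Once the CLI is installed, a sketch of a declarative pipeline talking to the remote daemon; the agent label is hypothetical, and DOCKER_TLS_VERIFY is an assumption that usually accompanies DOCKER_CERT_PATH:

pipeline {
    agent { label 'docker-cli' }   // hypothetical label for a slave with docker-client installed
    environment {
        DOCKER_HOST       = 'tcp://<ip>:<port>'       // remote daemon instead of a local docker.sock
        DOCKER_CERT_PATH  = '/var/jenkins_home/certs'
        DOCKER_TLS_VERIFY = '1'                       // assumed: TLS is normally on when certs are used
    }
    stages {
        stage('Smoke test') {
            steps {
                sh 'docker version'   // proves the CLI can reach the remote daemon
            }
        }
    }
}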

Jenkinsfile with writeFile inside docker container?

If I have a Jenkinsfile with
docker.image('foo').inside {
  writeFile file: bar, text: baz
}
the file gets written to the Jenkins agent's workspace, not inside the container, right? Is there a sane way to write a file inside a container?
The closest thing I was able to find on the web is https://issues.jenkins-ci.org/browse/JENKINS-33510. I suppose I could use a sh step with a here-doc, but that sounds pretty ugly.
There is a workspace in the container, and you can write files inside the container with writeFile.
But it's also shared with the workspace of the host node, because it's mounted as a volume.
inside will:
- Automatically grab a slave and a workspace (no extra node block is required).
- Pull the requested image to the Docker server (if not already cached).
- Start a container running that image.
- Mount the Jenkins workspace as a “volume” inside the container, using the same file path.
You can see in more detail what happens in inside here: https://go.cloudbees.com/docs/cloudbees-documentation/cje-user-guide/chapter-docker-workflow.html
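A small sketch illustrating this, assuming the image foo and a file name bar.txt:

node {
    docker.image('foo').inside {
        // The workspace is volume-mounted into the container at the same path,
        // so this file is visible both in the container and on the host.
        writeFile file: 'bar.txt', text: 'baz'
        sh 'cat bar.txt'    // read it from inside the container
    }
    sh 'cat bar.txt'        // still present on the agent after the container exits
}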
