Use repository name in Travis CI - Docker

I am facing an issue on Travis CI: I seem unable to use the repository name as an environment variable.
I have a Travis build that is set up to build, tag and push a Docker image.
In the after_success section of the .travis.yml file, the following command is run:
docker build -t ${PROJECT}:${TRAVIS_BRANCH} .
(The environment variable ${PROJECT} is the actual name of the repository and is set in the repository settings.)
The problem is that Docker uses "[secure]" as the image name instead of the repository name. I end up with something like
Successfully tagged [secure]:staging
After that, when I tag the image, the following error is returned:
Error parsing reference: "/[secure]:staging" is not a valid repository/tag: invalid reference format
I have tried changing ${PROJECT} to a random string and it worked fine.
Is there a way to use the repository name as an environment variable?

It seems as if your $PROJECT contains a leading slash, which might be the problem (note the "/" at the start of "/[secure]:staging").
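If the settings variable keeps coming through mangled, another option is to derive the name from Travis's predefined TRAVIS_REPO_SLUG variable ("owner/repo") instead of a repository-settings variable. A minimal sketch, assuming the image should simply be named after the bare repository:
after_success:
  # TRAVIS_REPO_SLUG is predefined by Travis as "owner/repo";
  # strip everything up to and including the slash to get the repo name.
  - export PROJECT=${TRAVIS_REPO_SLUG#*/}
  - docker build -t ${PROJECT}:${TRAVIS_BRANCH} .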

Using latest Docker image in Bitbucket pipe command

I have a Bitbucket pipeline that runs a pipe using a version number tag as follows:
script:
  - mkdir meta
  - pipe: myteam/bladepackager-pipeline:1.0.8
    variables: ...
I would prefer to have it automatically resolve the latest tagged version of the Docker image, so I tried:
script:
  - mkdir meta
  - pipe: myteam/bladepackager-pipeline:latest
    variables: ...
But I get an error message from my Bitbucket pipeline run that says
Your pipe name is in an invalid format. Check the name of the pipe and try again.
Is there a way to specify latest rather than a specific tag?
The tag latest is itself just a tag; it does not mean "the most recently created tag". So if you want to use an image via this tag, you have to explicitly build and push the image with that tag.
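For instance, to make :latest actually point at your newest image (a shell sketch; the image path is taken from the question and assumes the default Docker Hub registry):
# Build and push under the :latest tag explicitly.
docker build -t myteam/bladepackager-pipeline:latest .
docker push myteam/bladepackager-pipeline:latest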
The
- pipe: aaa/bbbb:1.2.3
syntax refers to a Git repository hosted on Bitbucket, e.g. bitbucket.org/aaa/bbbb, whereas the
- pipe: docker://registry.example.com/aaa/bbbb:tag
syntax refers to a Docker image in any registry.
The :latest tag can only be used with the docker:// syntax. For the bare pipe syntax I guess you can only try Git refs? Maybe :main or :master would be valid? I never managed to get it to work; please report back if you succeed.
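So, assuming the pipe's image is also pushed to a registry you can reach (the registry host below is an assumption), the docker:// form might get you :latest:
script:
  - mkdir meta
  # The docker:// form pulls the pipe image straight from a registry,
  # so any tag, including :latest, should be accepted here.
  - pipe: docker://registry.example.com/myteam/bladepackager-pipeline:latest
    variables: ...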

GitLabCI Kaniko on shared runner "error checking push permissions -- make sure you entered the correct tag name"

This similar question is not applicable because I am not using Kubernetes or my own registered runner.
I am attempting to build a Ruby-based image in my GitLabCI pipeline in order to have my gems pre-installed for use by subsequent pipeline stages. In order to build this image, I am attempting to use Kaniko in a job that runs in the .pre stage.
build_custom_dockerfile:
  stage: .pre
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  variables:
    IMAGE_TAG: ${CI_COMMIT_REF_SLUG}-${CI_COMMIT_SHORT_SHA}
  script:
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"username\":\"${CI_REGISTRY_USER}\",\"password\":\"${CI_REGISTRY_PASSWORD}\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context ${CI_PROJECT_DIR} --dockerfile ${CI_PROJECT_DIR}/dockerfiles/custom/Dockerfile --destination \
      ${CI_REGISTRY_IMAGE}:${IMAGE_TAG}
This is of course based on the official GitLabCI Kaniko documentation.
However, when I run my pipeline, this job returns an error with the following message:
error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: getting tag for destination: registries must be valid RFC 3986 URI authorities: registry.gitlab.com
The Dockerfile path is correct; by testing with invalid paths passed to the --dockerfile argument, it is clear to me this is not the source of the issue.
As far as I can tell, I am using the correct pipeline environment variables for authentication and following the documentation for using Kaniko verbatim. I am running my pipeline jobs with GitLab's shared runners.
According to this issue comment from May, others were experiencing a similar issue which was then resolved by reverting to the debug-v0.16.0 Kaniko image. Likewise, I changed the image name line to name: gcr.io/kaniko-project/executor:debug-v0.16.0, but this resulted in the same error message.
Finally, I tried creating a generic user to access the registry, using a deployment key as indicated here. Via the GitLabCI environment variables project settings interface, I added two variables corresponding to the username and key, and substituted these variables in my pipeline script. This resulted in the same error message.
I tried several variations on this approach, including renaming these custom variables to "CI_REGISTRY_USER" and "CI_REGISTRY_PASSWORD" (the predefined variables). I also made sure neither of these variables was marked as "protected". None of this solved the problem.
I have also tried running the tutorial script verbatim (without custom image tag), and this too results in the same error message.
Has anyone had any recent success in using Kaniko to build Docker images in their GitLabCI pipelines? It appears others are experiencing similar problems but as far as I can tell, no solutions have been put forward and I am not certain whether the issue is on my end. Please let me know if any additional information would be useful to diagnose potential problem sources. Thanks all!
I ran into this issue many times before, having forgotten that the variable was set to protected and is thus only exported to protected branches.
Hey, I got it working, but it was quite a hassle to find out.
The credentials I had to use were my Git username and password, not the registry user/password!
Here is what my gitlab-ci.yml looks like (of course you would want to replace everything with variables, but I was too lazy to do that until now):
build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  tags:
    - k8s
  script:
    - echo "{\"auths\":{\"registry.mydomain.de/myusername/mytag\":{\"username\":\"myGitusername\",\"password\":\"myGitpassword\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination registry.mydomain.de/myusername/mytag:$CI_COMMIT_SHORT_SHA
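For what it's worth, the same job with the hard-coded values moved into variables might look like this (a sketch; GIT_USERNAME and GIT_PASSWORD are hypothetical custom CI/CD variables holding the Git credentials, while the CI_* ones are predefined by GitLab):
build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  tags:
    - k8s
  script:
    # GIT_USERNAME/GIT_PASSWORD: custom variables set in the project's CI/CD settings.
    # CI_REGISTRY, CI_REGISTRY_IMAGE and CI_COMMIT_SHORT_SHA are predefined by GitLab.
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"username\":\"${GIT_USERNAME}\",\"password\":\"${GIT_PASSWORD}\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination ${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}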

Mount repo into Docker image when running a YAML pipeline in Azure DevOps

I am running a Docker image in an Azure DevOps YAML pipeline using a container job. However, I have problems mounting the content of the repo so that it is accessible from inside the Docker image.
The Azure DevOps pipeline.yml file is as follows:
container:
  image: 'image-name'
  endpoint: 'foo'
  options: '-v $(Build.SourcesDirectory):/testing'

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script inside docker image'
This fails with the error message:
Error response from daemon: create $(Build.SourcesDirectory): "$(Build.SourcesDirectory)" includes
invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended
to pass a host directory, use absolute path
I also tried replacing $(..) with $[..] (see here), but this results in the same error. With ${{..}} the pipeline will not even start (error: "A template expression is not allowed in this context" in the UI).
If I remove options, the script runs, but the repo is not mounted.
For non-YAML pipelines, the question was addressed here.
Any ideas how to accomplish this? Or do I need to create a new Docker image to which the repo files have been added?
Any ideas how to accomplish this? Or do I need to create a new Docker image to which the repo files have been added?
When you specify the container directly in the YAML schema like this, the Azure DevOps service automatically runs an extra "Initialize containers" task before the "Checkout source repo" task and your real tasks:
container:
  image: 'image-name'

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script inside docker image'
During the "Initialize containers" task, predefined variables like Agent.BuildDirectory, Build.Repository.LocalPath and Build.SourcesDirectory are not yet expanded (they are not defined at that point).
So you can't use Build.SourcesDirectory this way, because the value of this variable is only expanded after the "Initialize containers" task.
1. About why the link you shared above can work: it is in a Docker task/step, so it can recognize the $(Build.SourcesDirectory) variable. (The real build tasks run after the build variables are defined.)
2. If you're using a specific Microsoft-hosted agent, you can try hard-coding the path; you can check this similar issue.
Usually, for a Windows-hosted agent: $(Build.SourcesDirectory) => D:\a\1\s
For an Ubuntu-hosted agent: $(Build.SourcesDirectory) => /home/vsts/work/1/s
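Following suggestion 2, a sketch that hard-codes the Ubuntu-hosted agent's checkout path (the /home/vsts/work/1/s layout is what the hosted images use today, but it is not guaranteed to stay that way):
container:
  image: 'image-name'
  endpoint: 'foo'
  # Hard-coded host path instead of $(Build.SourcesDirectory), since that
  # variable is not yet expanded when "Initialize containers" runs.
  options: '-v /home/vsts/work/1/s:/testing'

steps:
- script: ls /testing
  displayName: 'The checked-out repo should be visible here'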

Jenkins pipeline DOCKER_HOST

I need to run a Docker container inside my pipeline.
My problem is that there is no docker.sock available inside the Jenkins container, and currently no way for me to get one.
But I found some jobs using Docker with this option:
"Inject environment variables to the build process" -> "Properties Content"
with the following configured:
DOCKER_HOST=tcp://<ip>:<port>
DOCKER_CERT_PATH=/var/jenkins_home/certs
In my understanding, this is equivalent to the docker.sock and usable in the same way, isn't it?
But how can I use it inside a (multibranch) pipeline project?
I've tried using this block inside my node:
environment {
    DOCKER_HOST = 'tcp://<ip>:<port>'
    DOCKER_CERT_PATH = '/var/jenkins_home/certs'
}
But I got the same issue: "docker: not found".
I might have a logic error somewhere. Hope someone can help.
Otherwise: is it possible to create a Jenkins slave that includes a docker.sock?
But I got the same issue: "docker: not found"
This indicates that your Jenkins slave, the one running the pipeline script, does not have the Docker command-line tools. This depends on your distribution, but in my case I fixed it by changing my build-slave/pipeline-runner creation steps to include:
yum install -y docker-client
Note that you'll still need this for the Cloudbees Docker plugin (the thing that provides niceties like docker.build() and docker.image()), because it translates those pipeline directives down into shell commands.
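Putting the two parts together, a declarative pipeline sketch (the agent label, IP/port and image name are placeholders; DOCKER_TLS_VERIFY is the standard companion variable to DOCKER_CERT_PATH when the daemon requires TLS):
pipeline {
    agent { label 'my-slave' }  // a node that has the docker CLI installed
    environment {
        DOCKER_HOST       = 'tcp://<ip>:<port>'
        DOCKER_CERT_PATH  = '/var/jenkins_home/certs'
        DOCKER_TLS_VERIFY = '1'
    }
    stages {
        stage('Build') {
            steps {
                // The CLI talks to the remote daemon over TCP,
                // so no docker.sock is needed inside the container.
                sh 'docker version'
                sh 'docker build -t myimage:${BUILD_NUMBER} .'
            }
        }
    }
}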

How to get job variables injected into the Docker execution?

I wonder if this is already part of the system...
I need the current GitLab user id and email ($GITLAB_USER_ID, $GITLAB_USER_EMAIL) injected into the execution of the Docker image (to later configure the Git repository).
Is there a magic way to do this, or should I explicitly write the export commands into my .gitlab-ci.yml file (as a before_script, for example)?
Thanks.
I got my answer by running the env command in a build.
So yes, all job variables are available in the Docker execution environment.
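A minimal sketch of how you might check and use them (the job name and image are assumptions; GITLAB_USER_NAME is another predefined variable that may suit git config better than the numeric GITLAB_USER_ID):
configure_git:
  image:
    name: alpine/git
    entrypoint: [""]
  script:
    # Show the injected user variables, as verified above.
    - env | grep GITLAB_USER_
    - git config --global user.name "${GITLAB_USER_NAME}"
    - git config --global user.email "${GITLAB_USER_EMAIL}"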
