circleci permission denied error when using machine executor - circleci

I would like to use the machine executor so that I can run some component tests with docker-compose. My workflow fails on the checkout step and throws this error:
Making checkout directory "/opt/my-app"
Error: mkdir /opt/my-app: permission denied
Here is the yaml for the component_test stage in my workflow:
component_test:
  machine: true
  working_directory: /opt/my-app
  steps:
    - checkout
If I use docker instead of the machine executor, then I don't get any permission issues:
component_test:
  docker:
    - image: your-docker-image
  working_directory: /opt/my-app
  steps:
    - checkout
But, I'd like to be able to use docker-compose and thus need to be able to run the machine executor. Has anyone seen a permission issue like this before?

You need to either change the working directory to something under /home/circleci, or just omit it completely, since it's optional.
Right now, the circleci user runs the checkout step and doesn't have permission to git clone into the working directory you chose.
Also, I wouldn't use machine: true, as that is deprecated. Specify an image instead: https://circleci.com/docs/2.0/configuration-reference/#available-machine-images
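Putting both suggestions together, a minimal sketch of the job could look like the following; the machine image tag and the /home/circleci/my-app path are just illustrative choices, not taken from the original config:

component_test:
  machine:
    image: ubuntu-2004:current        # any image from the machine images list should work
  working_directory: /home/circleci/my-app   # writable by the circleci user
  steps:
    - checkout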

Related

GitLabCI Kaniko on shared runner "error checking push permissions -- make sure you entered the correct tag name"

This similar question is not applicable because I am not using Kubernetes or my own registered runner.
I am attempting to build a Ruby-based image in my GitLabCI pipeline in order to have my gems pre-installed for use by subsequent pipeline stages. In order to build this image, I am attempting to use Kaniko in a job that runs in the .pre stage.
build_custom_dockerfile:
  stage: .pre
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  variables:
    IMAGE_TAG: ${CI_COMMIT_REF_SLUG}-${CI_COMMIT_SHORT_SHA}
  script:
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"username\":\"${CI_REGISTRY_USER}\",\"password\":\"${CI_REGISTRY_PASSWORD}\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context ${CI_PROJECT_DIR} --dockerfile ${CI_PROJECT_DIR}/dockerfiles/custom/Dockerfile --destination \
      ${CI_REGISTRY_IMAGE}:${IMAGE_TAG}
This is of course based on the official GitLabCI Kaniko documentation.
However, when I run my pipeline, this job returns an error with the following message:
error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: getting tag for destination: registries must be valid RFC 3986 URI authorities: registry.gitlab.com
The Dockerfile path is correct; by testing with deliberately invalid paths to the --dockerfile argument, I have confirmed that this is not the source of the issue.
As far as I can tell, I am using the correct pipeline environment variables for authentication and following the documentation for using Kaniko verbatim. I am running my pipeline jobs with GitLab's shared runners.
According to this issue comment from May, others were experiencing a similar issue, which was then resolved by reverting to the debug-v0.16.0 Kaniko image. Likewise, I changed the image name line to name: gcr.io/kaniko-project/executor:debug-v0.16.0, but this resulted in the same error message.
Finally, I tried creating a generic user to access the registry, using a deployment key as indicated here. Via the GitLabCI environment variables project settings interface, I added two variables corresponding to the username and key, and substituted these variables in my pipeline script. This resulted in the same error message.
I tried several variations on this approach, including renaming these custom variables to "CI_REGISTRY_USER" and "CI_REGISTRY_PASSWORD" (the predefined variables). I also made sure neither of these variables was marked as "protected". None of this solved the problem.
I have also tried running the tutorial script verbatim (without custom image tag), and this too results in the same error message.
Has anyone had any recent success in using Kaniko to build Docker images in their GitLabCI pipelines? It appears others are experiencing similar problems but as far as I can tell, no solutions have been put forward and I am not certain whether the issue is on my end. Please let me know if any additional information would be useful to diagnose potential problem sources. Thanks all!
I ran into this issue many times after forgetting that a variable was set to protected, and thus is only exported to protected branches.
Hey, I got it working, but it was quite a hassle to figure out.
The credentials I had to use were my Git username and password, not the registry user/password!
Here is what my gitlab-ci.yml looks like (of course you would need to replace everything with variables, but I was too lazy to do it until now):
build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  tags:
    - k8s
  script:
    - echo "{\"auths\":{\"registry.mydomain.de/myusername/mytag\":{\"username\":\"myGitusername\",\"password\":\"myGitpassword\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination registry.mydomain.de/myusername/mytag:$CI_COMMIT_SHORT_SHA
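Following the answer's own suggestion to replace the hard-coded values with variables, a hedged sketch might look like this. GIT_USERNAME and GIT_PASSWORD are hypothetical CI/CD variables you would define yourself in the project settings (holding the Git credentials rather than the registry ones, as described above, and not marked as protected unless the branch is protected):

build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  tags:
    - k8s
  script:
    # CI_REGISTRY and CI_REGISTRY_IMAGE are GitLab predefined variables;
    # GIT_USERNAME / GIT_PASSWORD are custom variables (assumed names)
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"username\":\"${GIT_USERNAME}\",\"password\":\"${GIT_PASSWORD}\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination ${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}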

Mount repo into docker image when running yaml-pipeline in Azure DevOps

I am running a Docker image in an Azure DevOps YAML pipeline using a container step. However, I have problems mounting the content of the repo so that it is accessible from inside the Docker image.
The Azure DevOps pipeline.yml file is as follows:
container:
  image: 'image-name'
  endpoint: 'foo'
  options: '-v $(Build.SourcesDirectory):/testing'

steps:
  - script: echo Hello, world!
    displayName: 'Run a one-line script inside docker image'
This fails with the error message:
Error response from daemon: create $(Build.SourcesDirectory): "$(Build.SourcesDirectory)" includes
invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended
to pass a host directory, use absolute path
I also tried replacing $(..) with $[..] (see here), but this results in the same error. With ${{..}} the pipeline will not even start (error: "A template expression is not allowed in this context" in the UI).
If I remove options the script runs, but the repo is not mounted.
For non-YAML pipelines, the question was addressed here.
Any ideas how to accomplish this? Or do I need to create a new Docker image where the repo files have been added?
When you specify the container directly in the YAML schema, the Azure DevOps service automatically runs an extra "Initialize containers" task before the "Checkout source repo" task and your real tasks.
container:
  image: 'image-name'

steps:
  - script: echo Hello, world!
    displayName: 'Run a one-line script inside docker image'
During the Initialize containers task, the predefined variables like Agent.BuildDirectory, Build.Repository.LocalPath and Build.SourcesDirectory are not yet expanded (they are undefined at that point).
So you can't use Build.SourcesDirectory this way, because the value of this variable is only expanded after the Initialize containers task.
1. About why the link you shared above works: it's in a docker task/step, so it can recognize the $(Build.SourcesDirectory) variable. (The real build tasks run after the build variables are defined.)
2. If you're using a specific Microsoft-hosted agent, you can try hard-coding the path. You can check this similar issue.
Usually, for a Windows-hosted agent: $(Build.SourcesDirectory) => D:\a\1\s
For an Ubuntu-hosted agent: $(Build.SourcesDirectory) => /home/vsts/work/1/s
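In other words, for an Ubuntu-hosted agent the mount from the question could be written with the path hard-coded. A sketch, assuming the agent really maps sources to /home/vsts/work/1/s:

container:
  image: 'image-name'
  endpoint: 'foo'
  options: '-v /home/vsts/work/1/s:/testing'   # hard-coded value of Build.SourcesDirectory

steps:
  - script: ls /testing                        # the repo contents should be visible here
    displayName: 'List the mounted sources inside the container'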

Azure DevOps Server (onprem) - container job - checkout not working

I'm trying to run my build inside a container with azure-pipelines in Azure DevOps Server (on-prem), following the official guide: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/container-phases?view=azure-devops-2019
I have a self-hosted Linux agent with Ubuntu 18.04 installed.
My azure-pipelines.yml
pool: linux-container-build
container: ubuntu:16.04

steps:
  - script: whoami
The container initialization works fine and creates the container properly. Afterwards, the checkout step fails without much information.
Picture of pipeline: pipeline
Checkout step just does this:
##[section]Starting: Checkout ***** to s
==============================================================================
Task : Get sources
Description : Get sources from a repository. Supports Git, TfsVC, and SVN repositories.
Version : 1.0.0
Author : Microsoft
Help : [More Information](https://go.microsoft.com/fwlink/?LinkId=798199)
==============================================================================
##[error]Collection was modified; enumeration operation may not execute.
##[section]Finishing: Checkout **** to s
I updated my task definition to:
- checkout: none
This skips the checkout step, and the 'whoami' step succeeds with proper output inside the container.
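For reference, the complete pipeline with the checkout skipped would then look roughly like this (same pool and container as above):

pool: linux-container-build
container: ubuntu:16.04

steps:
  - checkout: none   # skip the failing "Get sources" step inside the container
  - script: whoami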
It seems I need git inside my container? ...and probably all the other required packages too.
Can I somehow add git and all required applications to the _work folder or to externals, since these get mounted into the Docker volume?

Jenkins user not in passwd on dynamic jnlp slave in kubernetes

I am building a system to do C++ CMake builds primarily. I have Jenkins firing up the dynamic pods, firing off shell scripts, etc. But I can't get it to check out the code. My Jenkinsfile launches a container that the actual compile is supposed to run in; that "sub" container is tuned to compile C++ code. Jenkins runs scripts and such in that pod, but when I try
checkout scm
I'm getting errors saying
ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Command "git fetch --tags --force --progress git@gitlab.com:mystuff/hello-world-cmake.git +refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout:
stderr: No user exists for uid 1000080000
fatal: Could not read from remote repository.
My home folder is the standard /home/jenkins and the workspace folder is there, etc. But when I dump the /etc/passwd file, the jenkins user isn't listed in it.
What's the appropriate way to add the jenkins user to that file?
What image are you using for the Jenkins slave? Does it have the user jenkins? If it does, you need to specify this in the pod spec for your Jenkins slave:
spec:
  securityContext:
    runAsUser: 1000
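For context, a fuller, purely illustrative agent pod spec might look like the following; the container name and image are hypothetical, and runAsUser: 1000 assumes the jenkins user in your agent image has UID 1000:

apiVersion: v1
kind: Pod
spec:
  securityContext:
    runAsUser: 1000            # UID of the jenkins user in the agent image (assumption)
  containers:
    - name: cpp-builder        # hypothetical build container for the CMake builds
      image: gcc:12            # hypothetical C++ build image
      command: ["sleep", "infinity"]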
UPDATE:
You cannot run the default Jenkins image in OpenShift, because OpenShift runs containers as a random user. You should run Jenkins from the built-in "Jenkins Persistent" template. If you don't have this template and don't have a Jenkins image stream, you can try to use the image openshift/jenkins-2-centos7. See details at:
https://github.com/openshift/jenkins/issues/168
https://github.com/openshift/jenkins

Jenkins Pipeline gcloud problems in docker

I'm trying to set up a Jenkins pipeline according to this article, but instead use Google Container Registry to push the Docker images to.
The problem: the part which fails is this Jenkinsfile stage block:
stage ('Push Docker Image To Container Registry') {
    docker.image('google/cloud-sdk:alpine').inside {
        sh "echo ${env.GOOGLE_AUTH} > gcp-key.json"
        sh 'gcloud auth activate-service-account --key-file ./service-account-creds.json'
    }
}
The Error:
ERROR: (gcloud.components.update) Could not create directory [/.config/gcloud]: Permission denied.
Please verify that you have permissions to write to the parent directory.
I can't run any gcloud command, as the error above is what I get every time.
I tried creating the /.config directory manually (logged into the AWS instance) and opening up the folder's permissions to everyone, but that didn't help either.
I also can't find anywhere how to properly set up Google Cloud for a Jenkins pipeline using Docker.
Any suggestions are greatly appreciated :)
It looks like it's trying to write data directly into your root file system directory.
The .config directory for gcloud would normally be in the following locations for username and/or root user:
/home/yourusername/.config/gcloud
/root/.config/gcloud
It looks like, for some reason, jenkins thinks the parent directory should be in /.
I would try checking where your Cloud SDK config directories are on the machine you are running this on (and for the user the script runs as):
$ sudo find / -iname "gcloud"
And look for locations similar to those printed above.
Could it be that the Cloud SDK is installed in a non-standard location on the machine?
