I want to push my artifact from Jenkins to Nexus, but I am getting this error:
Error response from daemon: login attempt to http://142.93.XXX.XXX:8081/v2/ failed with status: 404 Not Found
Build step 'Execute shell' marked build as failure
Finished: FAILURE
My Docker commands:
docker build -t 142.93.XXX.XX:8081/java-maven-app:1.0 .
echo $PASSWORD | docker login -u $USERNAME --password-stdin 142.93.XXX.XX:8081
docker push 142.93.XXX.XX:8081/java-maven-app:1.0
Everything looks correct except that the v2 folder does not exist in the repository. I thought it would be created automatically, and I can't create the folder manually. How do I solve this issue?
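For reference, port 8081 is the Nexus UI/REST port and does not serve the Docker /v2/ API; in Nexus 3 a hosted Docker repository needs its own HTTP (or HTTPS) connector port configured in the repository settings, and docker login/push must target that port. A sketch, assuming the connector is configured on 8082 (a hypothetical port here):
# 8082 below is an assumed Docker connector port set on the Nexus repository
docker build -t 142.93.XXX.XX:8082/java-maven-app:1.0 .
echo $PASSWORD | docker login -u $USERNAME --password-stdin 142.93.XXX.XX:8082
docker push 142.93.XXX.XX:8082/java-maven-app:1.0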
sshpass -p Naveen#1604 scp -r /var/lib/jenkins/workspace/fastpostclose stellar@192.168.25.154:/var/www/html/fastpostclose
Build step 'Execute shell' marked build as failure
Finished: FAILURE
I'm using this command to copy files from Jenkins to an Ubuntu server, but it is showing a build failure.
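One common cause when this runs non-interactively is scp hanging or failing on the SSH host-key prompt, which Jenkins then reports as a failed build step. A sketch of the same copy with the prompt suppressed and the password quoted (reusing the credentials and paths from the question):
# quote the password defensively and skip the interactive host-key prompt
sshpass -p 'Naveen#1604' scp -o StrictHostKeyChecking=no -r /var/lib/jenkins/workspace/fastpostclose stellar@192.168.25.154:/var/www/html/fastpostclose
The destination directory also has to be writable by the 'stellar' user, or scp will exit non-zero and mark the build as failed.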
Running GitLab version 14.8.2, with the same version for the runner, which uses a simple shell executor.
This is my CI YAML file:
variables:
  REPOSITORY: $CI_REGISTRY/acme/test/test-acme/master

before_script:
  - export PATH=$PATH:/usr/local/go/bin
  - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY

build_image:
  script:
    - echo -e "machine gitlab.acme.com\nlogin gitlab-ci-token\npassword ${CI_JOB_TOKEN}" > $HOME/.netrc
    - git config --global url."https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.acme.com".insteadOf git@gitlab.acme.com
    - go mod download
    - go build
    - docker build -f Dockerfile.A4B -t $REPOSITORY:latest .
    - docker push $REPOSITORY:latest
This is the output:
Running with gitlab-runner 14.8.2 (c6e7e194)
on gitlab-runner-4 QxNeqEeQ
Preparing the "shell" executor 00:00
Using Shell executor...
Preparing environment 00:01
Running on gitlab-runner-4...
Getting source from Git repository 00:00
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /home/gitlab-runner/builds/QxNeqEeQ/0/acme/test/test-acme/.git/
Checking out aa26121e as master...
Removing test-acme
Skipping Git submodules setup
Executing "step_script" stage of the job script 00:02
$ export PATH=$PATH:/usr/local/go/bin
$ docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/gitlab-runner/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
$ echo -e "machine gitlab.acme.com\nlogin gitlab-ci-token\npassword ${CI_JOB_TOKEN}" > $HOME/.netrc
$ git config --global url."https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.acme.com".insteadOf git@gitlab.acme.com
$ go mod download
$ go build
$ docker build -f Dockerfile.A4B -t $REPOSITORY:latest .
Step 1/8 : FROM registry.acme.com/acme/base/docker-go-runtime/master
Head "https://registry.acme.com/v2/acme/base/docker-go-runtime/master/manifests/latest": error parsing HTTP 403 response body: no error details found in HTTP response body: "{\"message\":\"access forbidden\",\"status\":\"error\",\"http_status\":403}"
Cleaning up project directory and file based variables 00:00
ERROR: Job failed: exit status 1
I can log in with no problem but once I try to build the image it seems I'm not authorized. The Dockerfile.A4B is this:
FROM registry.acme.com/acme/base/docker-go-runtime/master
....
If I do a pull like this it works just fine:
docker pull registry.acme.com:5050/acme/test/test-zip/master
UPDATE
I noticed that if I change my Dockerfile.A4B to this:
FROM registry.acme.com:5050/acme/base/docker-go-runtime/master
Instead of this:
FROM registry.acme.com/acme/base/docker-go-runtime/master
basically adding port 5050 after the registry host, it works.
So I'm wondering: is something wrong with the repository configuration?
The funny thing is that if I create a deploy token and log in like this:
docker login registry.acme.com -u gitlab+deploy-token-2 -p password
I have full read and write rights, but when I then try to do a docker build like this, it fails:
docker build -f Dockerfile.A4B -t registry.acme.com/acme/test/test-zip/master:latest .
Sending build context to Docker daemon 24.47MB
Step 1/8 : FROM registry.acme.com/acme/base/docker-go-runtime/master
error parsing HTTP 404 response body: unexpected end of JSON input: ""
which is a slightly different error.
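Based on the pull test above, the registry appears to be reachable only on port 5050, so the port has to appear everywhere the image is referenced, not just in the login. A sketch, reusing the paths from this project:
# Dockerfile.A4B: reference the base image with the registry port
FROM registry.acme.com:5050/acme/base/docker-go-runtime/master
# and build/push against the same host:port
docker build -f Dockerfile.A4B -t registry.acme.com:5050/acme/test/test-acme/master:latest .
docker push registry.acme.com:5050/acme/test/test-acme/master:latest
In GitLab the value of $CI_REGISTRY normally comes from the registry_external_url configured on the server, so checking whether it already includes :5050 is a good first step.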
I set up a CodeBuild project for a Python project whose dependencies take too long to build, so I enabled artifact caching for Docker layers. This works fine but only lasts for a short while, and the cache is invalidated for builds even 15 minutes apart. Another solution I thought of was to pull the Docker image in the pre_build step, but it doesn't seem to work. My buildspec:
version: 0.2

env:
  secrets-manager:
    DOCKERHUB_ID: arn:aws:secretsmanager:■■■■■■:■■■■■■:■■■■■■:■■■■■■/■■■■■■:■■■■■■
    DOCKERHUB_TOKEN: arn:aws:secretsmanager:■■■■■■:■■■■■■:■■■■■■:■■■■■■/■■■■■■:■■■■■■

phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
      - echo Logging in to Docker Hub...
      - echo $DOCKERHUB_TOKEN | docker login -u $DOCKERHUB_ID --password-stdin
      - docker pull $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME || true
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image on branch $CODEBUILD_WEBHOOK_HEAD_REF ...
      - touch .env
      - echo $ENV_PREFIX$IMAGE_REPO_NAME:$IMAGE_TAG
      - docker build --cache-from $IMAGE_REPO_NAME:$IMAGE_TAG --build-arg BUILD_SECRET_KEY=$SECRET_KEY -t $IMAGE_REPO_NAME:$IMAGE_TAG -f docker/django/Dockerfile .
      - docker tag $ENV_PREFIX$IMAGE_REPO_NAME:$IMAGE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
      - IMAGE_DIFINITION_APP="{\"name\":\"${CONTAINER_NAME}\",\"imageUri\":\"${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${IMAGE_REPO_NAME}:${IMAGE_TAG}\"}"
      - echo "[${IMAGE_DIFINITION_APP}]" > imagedefinitions.json

artifacts:
  files: imagedefinitions.json
I can successfully pull the image in pre_build, but the build step gives me this error:
#7 ERROR: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
The role I'm using already grants full privileges on ECR. Is there any other permission I'm missing?
Any help is greatly appreciated.
Take a look here:
ECR policies
Maybe you did not add the required permissions in the ECR repository policy.
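As a quick check (a sketch; the aws CLI is assumed to be configured in the build environment), you can inspect what the repository policy currently allows:
# prints the repository policy attached to the ECR repo, if any
aws ecr get-repository-policy --repository-name $IMAGE_REPO_NAME --region $AWS_DEFAULT_REGION
The role or policy used for pulling needs at least ecr:GetAuthorizationToken, ecr:BatchCheckLayerAvailability, ecr:GetDownloadUrlForLayer and ecr:BatchGetImage.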
It took me a while to understand what went wrong. Instead of:
- docker build --cache-from $IMAGE_REPO_NAME:$IMAGE_TAG --build-arg BUILD_SECRET_KEY=$SECRET_KEY -t $IMAGE_REPO_NAME:$IMAGE_TAG -f docker/django/Dockerfile .
The registry name should be added before the repo name; otherwise Docker will look in Docker Hub instead of ECR:
- docker build --cache-from $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME --build-arg BUILD_SECRET_KEY=$SECRET_KEY -t $IMAGE_REPO_NAME:$IMAGE_TAG -f docker/django/Dockerfile .
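With the classic Docker builder, --cache-from only uses images that are already present locally, so the pre_build pull should reference the same fully qualified name and tag. A sketch, reusing the variables from the buildspec above:
# pull the previously pushed image so its layers can seed --cache-from;
# '|| true' keeps the build going on the very first run when the tag does not exist yet
docker pull $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG || true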
So I'm trying to set up my GitLab CI to trigger a job on git push that builds and deploys my Docker image. This is the .gitlab-ci.yml file I'm using, based on an example from the GitLab docs (Elixir YAML).
stages:
  - build

build:
  before_script:
    - docker build -f Dockerfile.build -t ci-project-build-$CI_PROJECT_ID:$CI_BUILD_REF .
    - docker create
      -v /build/deps
      -v /build/_build
      -v /build/rel
      -v /root/.cache/aceapp/
      --name build_data_$CI_PROJECT_ID_$CI_BUILD_REF busybox /bin/true
  tags:
    - docker
  stage: build
  script:
    - docker run --volumes-from build_data_$CI_PROJECT_ID_$CI_BUILD_REF --rm -t ci-project-build-$CI_PROJECT_ID:$CI_BUILD_REF
The output when pushing to GitLab instance is this:
Running with gitlab-runner 10.7.2 (b5e03c94)
on my.host.rhel.runner 8f724ea7
Using Shell executor...
Running on my.host.local...
Fetching changes...
HEAD is now at 14351c4 Merge branch 'Development' into 'master'
From https://my.host.example/zalmosc/ace-app
14351c4..9fa2d43 master -> origin/master
Checking out 9fa2d435 as master...
Skipping Git submodules setup
$ # Auto DevOps variables and functions # collapsed multi-line command
$ setup_docker
$ build
Logging to GitLab Container Registry with CI credentials...
Login Succeeded
Building Dockerfile-based application...
invalid argument "/master:9fa2d4358e6c426b882e2251aa5a49880013614b" for t: Error parsing reference: "/master:9fa2d4358e6c426b882e2251aa5a49880013614b" is not a valid repository/tag: invalid reference format
See 'docker build --help'.
ERROR: Job failed: exit status 1
I understand the Docker tag is not valid (is the before_script: really triggered based on the name?), and I'm looking for help regarding a) a solution and b) how I can learn more about the requirements for a pipeline that builds Docker images with the default settings. Do I need to tag my Docker image locally and then somehow add this to my git commit?
The thing is, -t is there to tag your Docker image. See the docs here.
The tag should be formatted like name:version, and you are giving it /master:9fa2d4358e6c426b882e2251aa5a49880013614b, which is not a valid tag. You could try deleting the / before master.
Your tag cannot begin with '/':
$ docker build -f Dockerfile.build -t /master:9fa2d4358e6c426b882e2251aa5a49880013614b .
invalid argument "/master:9fa2d4358e6c426b882e2251aa5a49880013614b" for "-t, --tag" flag: invalid reference format
See 'docker build --help'.
# remove '/'
$ docker build -f Dockerfile.build -t master:9fa2d4358e6c426b882e2251aa5a49880013614b .
Sending build context to Docker daemon 3.584kB
Step 1/3 : FROM ubuntu:16.04
---> 14f60031763d
...
If you are not using the built-in registry, you might have to set the CI_REGISTRY_IMAGE value to something. It seems that if you don't set this, it gets resolved to /master and causes this error. You can set it in the CI settings page or when creating a new pipeline, e.g. CI_REGISTRY_IMAGE = gitlab.com/user/project.
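For example, a minimal sketch of setting it directly in the .gitlab-ci.yml (registry.gitlab.com/user/project is a hypothetical path; replace it with your project's registry path):
variables:
  # hypothetical registry path for illustration
  CI_REGISTRY_IMAGE: registry.gitlab.com/user/project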
I am using Jenkins to run an 'Execute shell' command:
ls -l /mnt/ftpbackup/ftpuser/*
But I'm getting this error:
ls: cannot open directory /mnt/ftpbackup/ftpuser/: Permission denied
I am able to run the very same command when I log in as the 'jenkins' user, see below:
-bash-4.1$ id
uid=493(jenkins) gid=490(jenkins) groups=490(jenkins),504(ftpuser) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
-bash-4.1$ ls -l /mnt/ftpbackup/ftpuser
total 48116044
....
Are you trying to execute this command on the Jenkins master, through Jenkins?
Try the whoami command to find out which user Jenkins is using to execute commands.
If you are executing on a node, Jenkins will connect to that node using the credentials you have provided in the node settings, so please check those.
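For example, a quick diagnostic at the top of the Execute shell step (a sketch):
# show which account and groups the build actually runs under
whoami
id
# check ownership and permissions on the directory and its parents
ls -ld /mnt /mnt/ftpbackup /mnt/ftpbackup/ftpuser
If the interactive 'jenkins' shell shows the ftpuser group but the job's id output does not, the running Jenkins process may simply predate the group change and need a restart to pick it up.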