Configuring a Mesos slave to use Amazon ECR gives the following error when it receives a job:
Unsupported auth-scheme: BASIC
Does this look familiar to anyone? I'm running the slave pointed at my user's Docker config JSON file, which I updated by running docker login beforehand.
Turns out it was my mistake: I hadn't modified the application that started this job to include the registry in the Docker image name.
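For the record, the fix is to reference the image by its full ECR registry hostname; a sketch with a placeholder account ID and region:

# Tag and push using the full ECR registry hostname (account ID and region are placeholders)
docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest

The job description then has to reference the image as 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest rather than just my-app:latest, so the slave knows which registry credentials apply.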
First, I'm a noob with Continuous Deployment. I currently have a VPS running 3 Docker containers (Flask, MongoDB, Nginx) that I'm pulling from Docker Hub with docker-compose. What I want to do is automatically deploy those 3 containers when I push code to my GitHub repo. I think it's possible with Ansible, but I've never used it.
Can someone explain how to do it?
Many thanks!
In the end I will use Jenkins :)
That implies a webhook, as explained in "How to Integrate Your GitHub Repository to Your Jenkins Project" by Guy Salton
And that means your Jenkins server is accessible through an internet-facing public URL, which is not always obvious when working in a corporate environment.
GitHub Actions "Publishing Docker images" can help publishing the image to DockerHub, but you still need to listen/detect those events in order for your Jenkins to trigger job pulling said ppublished images.
To detect those events, a regularly scheduled Jenkins job using regclient/regclient can help check whether the SHA-256 digest of the latest published image has changed.
See more with "Container Registry Management with Brandon Mitchell: DevOps and Docker (Ep 108)".
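As a sketch (the image name and the state file are placeholders), such a scheduled job could compare digests with regctl, the CLI from regclient:

# Fetch the digest of the currently published tag
NEW=$(regctl image digest docker.io/myuser/myapp:latest)
# Compare with the digest recorded on the previous run; on change, record it and trigger the deploy
if [ "$NEW" != "$(cat last-digest 2>/dev/null)" ]; then
  echo "$NEW" > last-digest
  echo "New image published, triggering deployment"
fi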
I have a GKE cluster with a running Jenkins master. I am trying to start a build. I am using a pipeline with a slave configured by the Kubernetes plugin (pod templates). I have a custom image for my Jenkins slave published in GCR (private access). I have added credentials (a Google service account) for my GCR to Jenkins. Nevertheless, Jenkins/Kubernetes fails to start a slave because the image can't be pulled from GCR. When I use public images (jnlp) there is no issue.
But when I try to use the image from GCR, Kubernetes says:
Failed to pull image "eu.gcr.io/<project-id>/<image name>:<tag>": rpc error: code = Unknown desc = Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
This happens even though the pod is running in the same project as the GCR registry.
I would expect Jenkins to start the slave even if I use an image from GCR.
Even if the pod is running in a cluster in the same project, it is not authenticated by default.
You stated that you've already set up the Service Account, and I assume that a key for it has been furnished on the Jenkins server.
If you're using the Google OAuth Credentials Plugin you can then also use the Google Container Registry Auth Plugin to authenticate to a private GCR repository and pull the image.
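If the plugins don't work out, a common alternative (a sketch, assuming the service account's JSON key is saved as key.json) is to create a Kubernetes image pull secret and reference it from the pod template's imagePullSecrets setting:

# Create a pull secret from the GCR service account key (key.json and the email are placeholders)
kubectl create secret docker-registry gcr-pull-secret \
  --docker-server=eu.gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat key.json)" \
  --docker-email=ci@example.com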
I'm new to CI/CD process.
We currently deploy a Spring Boot application through Jenkins into Docker on the same machine.
We have been searching the internet for how to deploy an application to another server; the only lead we have found is the SSH Agent. As far as we know, SSH is only for communication.
Could we have a complete example of how to deploy to another server, and what other preventive measures should be taken into account?
Kindly guide us.
In your Jenkins pipeline you need to define a stage for publishing the Docker image, and in your infrastructure you need a repository that stores your artifacts and Docker images.
Repositories I know of are Nexus and JFrog Artifactory.
So your server1, at the end of the pipeline, will upload the stable Docker image to Nexus.
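For example, the publish stage would run something like this (the Nexus hostname and port are placeholders):

# Tag the stable image for the Nexus Docker registry and upload it
docker tag myapp:1.0 nexus.example.com:8082/myapp:1.0
docker login nexus.example.com:8082
docker push nexus.example.com:8082/myapp:1.0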
To run the Docker images on another server (not using an orchestrator) you may use Ansible.
On the net you can find a lot of sources, for example: https://www.codementor.io/mamytianarakotomalala/how-to-deploy-docker-container-with-ansible-on-debian-8-mavm48kw0
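The manual equivalent of what such a playbook automates looks roughly like this (hostnames, ports, and names are placeholders):

# On server2 (here over SSH), pull the published image from Nexus and restart the container
ssh deploy@server2 '
  docker pull nexus.example.com:8082/myapp:1.0 &&
  docker rm -f myapp 2>/dev/null;
  docker run -d --name myapp -p 8080:8080 nexus.example.com:8082/myapp:1.0
'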
I'm trying to get the Docker Build and Publish plugin working on our Jenkins instance. The Docker image is getting built correctly, but I'm having issues with getting this image pushed to our Artifactory Docker Repository.
The Artifactory repository is hosted at https://instance.company.com/artifactory/test-docker-build
When I look in the logs for the build, it fails to upload the Docker image, but the URL looks like https://instance.company.com/test-docker-build. Here is the output from the log:
[workspace] $ docker push instance.company.com/test-docker-build:test
The push refers to a repository [instance.company.com/test-docker-build] (len: 1)
Sending image list
Error: Status 405 trying to push repository test-docker-build: "<!DOCTYPE HTML PUBLIC \"-//IETF//DTD HTML 2.0//EN\">\n<html><head>\n<title>405 Method Not Allowed</title>\n</head><body>\n<h1>Method Not Allowed</h1>\n<p>The requested method PUT is not allowed for the URL /v1/repositories/test-docker-build/.</p>\n</body></html>\n"
Build step 'Docker Build and Publish' marked build as failure
Finished: FAILURE
I am assuming that it is failing because the repository URL is incorrect. I have also tried logging into the backend and pushing via the command line with the correct repository URL and that doesn't seem to work either.
My main question is: does Docker not like the URL structure, since it uses the '/' to denote user/image name? I.e., would this work if the URL didn't include /artifactory?
Any ideas would be greatly appreciated!
The response you're getting back looks like something a reverse proxy (nginx/Apache) would return, not Artifactory. Did you follow the instructions on how to set up a reverse proxy with Artifactory?
Usually you would need to:
set up the reverse proxy for Docker usage correctly
either get a certificate for company.com and use it in the proxy, or set up Artifactory as an insecure registry (beware: this should not be used in production)
Once you've done these, you will reference the Artifactory instance as instance.company.com; there is no need for /artifactory or anything else.
The URL you posted is what the reverse proxy should use when directing to Artifactory (i.e. instance.company.com/artifactory/api/docker/test-docker-build/v2).
Then usage would look like docker push instance.company.com/myImage:myTag
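So the full client-side flow, once the proxy is in place, would be (image name and tag are placeholders):

# Authenticate against the proxied registry, then tag and push
docker login instance.company.com
docker tag myImage:myTag instance.company.com/myImage:myTag
docker push instance.company.com/myImage:myTag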
I'm trying to run a Docker build task to create a Docker image. I set up a Docker host, I'm using the default Docker Hub as the registry, and my whole environment is on Azure.
When I queue a build, it fails at the Docker task.
Log output:
check path : null
task result: Failed
Not found docker: null
Finishing task: Docker
[error]Task Docker failed. This caused the job to fail. Look at the logs for the task for more details.
Does someone have any thought on what may be happening?
After looking into this, it would seem this happens if Docker is not properly installed on the build agent for the service principal the agent is running under.
Keep in mind that:
The build must be run on a private agent, as the hosted ones do not yet have Docker installed, per a very small footnote at the bottom of the documentation.
The VSTS agent must be running as a principal that has the environment variables set for Docker to run; the default is the LocalService account, which won't have them. This turns out to be a problem with other things as well, and I've found it best to have a dedicated user principal to run the agent under, one that can also log into the system.
Fixing these two issues made it work for me.
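A quick sanity check is to run these in a shell under the same account the agent service uses; if either command fails, the Docker task will too:

# Verify the agent's account can find the Docker client and reach the daemon
docker --version
docker info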
I was able to switch the agent to Hosted VS2017, which has Docker support.
If Linux is an option, try Hosted Linux Preview