I'm trying to get the Docker Build and Publish plugin working on our Jenkins instance. The Docker image is getting built correctly, but I'm having issues with getting this image pushed to our Artifactory Docker Repository.
The Artifactory repository is hosted at https://instance.company.com/artifactory/test-docker-build
When I look in the logs for the build, it fails to upload the Docker image, but the URL looks like https://instance.company.com/test-docker-build. Here is the output from the log:
[workspace] $ docker push instance.company.com/test-docker-build:test
The push refers to a repository [instance.company.com/test-docker-build] (len: 1)
Sending image list
Error: Status 405 trying to push repository test-docker-build: "<!DOCTYPE HTML PUBLIC \"-//IETF//DTD HTML 2.0//EN\">\n<html><head>\n<title>405 Method Not Allowed</title>\n</head><body>\n<h1>Method Not Allowed</h1>\n<p>The requested method PUT is not allowed for the URL /v1/repositories/test-docker-build/.</p>\n</body></html>\n"
Build step 'Docker Build and Publish' marked build as failure
Finished: FAILURE
I am assuming that it is failing because the repository URL is incorrect. I have also tried logging into the backend and pushing via the command line with the correct repository URL, but that doesn't seem to work either.
My main question is: does Docker not like the URL structure, since it uses '/' to denote the user/image name? I.e., would this work if the URL didn't include /artifactory?
Any ideas would be greatly appreciated!
The response you're getting back looks like something a reverse proxy (nginx/Apache) would return, not Artifactory. Did you follow the instructions on how to set up a reverse proxy with Artifactory?
Usually you would need to:
set up the reverse proxy correctly for Docker usage
either get a certificate for company.com and use it in the proxy, or set up Artifactory as an insecure registry (beware: this should not be used in production)
Once you have done this, you will reference the Artifactory instance as instance.company.com; there is no need for /artifactory or anything else.
The URL you posted is what the reverse proxy should use when forwarding to Artifactory (i.e. instance.company.com/artifactory/api/docker/test-docker-build/v2).
Usage would then look like: docker push instance.company.com/myImage:myTag
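A minimal sketch of the client side once that is in place, assuming instance.company.com is the hostname the proxy exposes for the Docker registry (the image name and tag are placeholders). If you take the insecure-registry route instead of a proper certificate (again, not for production), tell the Docker daemon about it first:

# merge this into your existing /etc/docker/daemon.json rather than overwriting it
echo '{ "insecure-registries": ["instance.company.com"] }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker

# then log in, tag and push without any /artifactory path in the image name
docker login instance.company.com
docker tag myImage:myTag instance.company.com/myImage:myTag
docker push instance.company.com/myImage:myTag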
Related
I followed the JFrog blog post detailing the process, but have not been successful in pulling any images through that mirror specifically. Other mirrors (such as docker-hub) do work correctly.
Here are screenshots I've taken to show the manifest syntax I followed when pulling the ironbank image through Artifactory:
Manifest syntax example from the JFrog guide
Docker manifest not found
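In plain text, the pull I attempted looks roughly like this (the Artifactory hostname, repository key, and image path below are illustrative placeholders, not the exact redacted values from the screenshots):

docker login my-artifactory.example.com
docker pull my-artifactory.example.com/ironbank-docker-remote/ironbank/redhat/ubi/ubi8:8.5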
This is within the same major version of Artifactory, so I expected there not to be any API changes. I verified this by pulling other docker images through a docker-hub mirror repository.
The redacted-user matches the Username in Ironbank's registry1 User Profile and the CLI token was generated to use as the access token:
Artifactory mirror config
Has anyone gotten their ironbank mirror proxy to work through Artifactory?
I'm trying to push a Bitbucket repository to a private repository on Docker Hub as a Docker image. The build is successful up to the point where I get this error:
docker push chatapp/monorepo
+ docker push chatapp/monorepo
The push refers to repository [docker.io/chatapp/monorepo]
An image does not exist locally with the tag: chatapp/monorepo
Does this have anything to do with how the Dockerfile inside the Bitbucket repository is written? Or are there scripts missing from the bitbucket-pipeline.yml file?
I'm new to Docker and I can't seem to figure this out.
The error signifies that the image you're trying to push doesn't exist locally on the machine you're pushing from. Run docker images and check whether it's there. If it isn't, there's a problem with the pipeline step that builds it; if it exists but under a different name, fix the tag so it matches what you push.
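For example, the pipeline's script ultimately needs to run shell steps along these lines, in this order, so the tag exists locally before the push (the credential variables are placeholder repository variables, not something Bitbucket provides by default):

docker build -t chatapp/monorepo:latest .
docker images    # chatapp/monorepo should now be listed here
docker login -u "$DOCKERHUB_USERNAME" -p "$DOCKERHUB_PASSWORD"
docker push chatapp/monorepo:latest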
I have a linux bare-metal server with docker installed.
I work on an ASP.NET Core project on my computer.
My source code is pushed to GitHub.
Each time I commit and push something, GitHub triggers a webhook on my Docker Hub account.
Docker Hub builds a new image which contains my ASP.NET Core application binaries (Docker Hub also runs the tests).
This image works fine when I pull it manually on my server.
My question is: how can I do this automatically? Is there a way for my server to "detect" that Docker Hub contains a new version of the image and run something to pull this image and fire database migrations automatically?
Thanks
If you have a public IP that external services on the internet, such as Docker Hub, can reach, then you can use Docker Hub webhooks:
You can create a webhook in Docker Hub's settings: set the URL at which your service can be reached from outside. When an image is pushed, Docker Hub will POST some JSON data to the URL you provided (there is an example payload in the docs); your endpoint can then receive that data and do whatever you like in response.
And if you use Jenkins, there are lots of plugins that help you do similar things: see Triggering Docker pipelines with Jenkins, and also Polling Docker Registries for Image Changes.
If you don't have a public IP that Docker Hub can reach, then I guess you have to poll Docker Hub to see if a new image is there...
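For example, the handler behind that URL could run a small script along these lines when the POST arrives (the image name, container name, port mapping, and migration step are placeholders for your setup):

docker pull myuser/myapp:latest                  # fetch the image Docker Hub just built
docker stop myapp 2>/dev/null || true            # replace the running container
docker rm myapp 2>/dev/null || true
docker run -d --name myapp -p 80:80 myuser/myapp:latest
# apply database migrations however your app does it, e.g. via a one-off container:
# docker run --rm myuser/myapp:latest dotnet ef database update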
I am trying to trigger a Jenkins job (via "trigger builds remotely") on a Docker image build; all I am getting on Docker Hub is the following:
HISTORY
ID Status Date & Time
7345... ! ERROR 10/12/17 10:03
Reason (I assume): Docker is not authenticated to post to the Jenkins URL.
Question: How can I trigger the job automatically when an image gets pushed to docker hub?
Pull and run the Watchtower Docker image to poll any third-party public Docker image on Docker Hub or Quay that you need (typically as a base image of your own containers). Here's how. "Polling" here does not imply crudely pulling the whole image every 5 minutes or so; we are monitoring periodically for changes in the image, downloading only the checksum (SHA digest) most of the time (when there are no changes in the locally cached image).
Install the Build Token Root Plugin on your Jenkins server and set it up to receive Slack-formatted notifications secured with a token, to trigger builds remotely or (safer) locally (those triggers will be coming from the Watchtower container, not Slack). Here's how.
Set up Watchtower to post Slack messages to your Jenkins endpoint upon every change in the image(s) (tags) that you want. Here's how, and see the sketch after this list.
Optionally, if your scale is so large that you could end up overloading and bringing down the entire Docker Hub with a flood of HTTP GET requests (should the time triggers go wrong and turn into a tight loop), make sure to build in some safety checks on top of Watchtower to "watch the watchman".
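A rough sketch combining steps 1 and 3, assuming the Build Token Root endpoint from step 2 is already set up (the Jenkins URL, job name, token, and container name are placeholders; the environment variable names are Watchtower's Slack notification settings, so verify them against the Watchtower docs for your version):

docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e WATCHTOWER_NOTIFICATIONS=slack \
  -e WATCHTOWER_NOTIFICATION_SLACK_HOOK_URL="https://jenkins.example.com/buildByToken/build?job=my-job&token=MY_TOKEN" \
  containrrr/watchtower --interval 300 my-container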
You can try the following plugin: https://wiki.jenkins.io/display/JENKINS/CloudBees+Docker+Hub+Notification, which claims to do what you're looking for.
You can configure a webhook in Docker Hub which will trigger the Jenkins build.
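For example, with the CloudBees Docker Hub Notification plugin installed, the webhook you create in Docker Hub would point at the endpoint the plugin exposes on your Jenkins host (the hostname here is a placeholder; verify the exact path against the plugin's documentation):

https://jenkins.example.com/dockerhub-webhook/notify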
Docker Hub webhooks targeting your Jenkins server endpoint require making periodic copies of the image to another repo that you own, since webhooks can only be configured on repos you own [see my other answer with the Docker Hub -> Watchtower -> Jenkins integration through Slack notifications].
More details
You need to set up a cron job with periodic polling (docker pull) of the source repo to pull its 'latest' tag, and if a change is detected, re-tag it as your own and push it to a repo you own (e.g. a "clone" of the source Docker Hub repo) where you have set up a webhook targeting your Jenkins build endpoint.
Then and only then (in a repo you own) will Jenkins plugins such as Docker Hub Notification Trigger work for you.
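A sketch of that cron job, assuming ubuntu:latest is the upstream image of interest and youruser/ubuntu-mirror is the repo you own with the webhook configured (both are placeholders):

docker pull ubuntu:latest
NEW_DIGEST=$(docker inspect --format '{{index .RepoDigests 0}}' ubuntu:latest)
if [ "$NEW_DIGEST" != "$(cat ~/.last_ubuntu_digest 2>/dev/null)" ]; then
  docker tag ubuntu:latest youruser/ubuntu-mirror:latest
  docker push youruser/ubuntu-mirror:latest    # this push fires the Docker Hub webhook
  echo "$NEW_DIGEST" > ~/.last_ubuntu_digest
fi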
Polling for Dockerfile / release changes
As a substitute for polling the registry for image changes (which need not generate much network traffic, thanks to the local cache of Docker images), you can also poll the source Dockerfile on GitHub using wget. For instance, the Dockerfiles of the official Docker Hub images are here. When the GitHub repo makes releases, you can get push notifications of them using GitHub's Watch > Releases Only feature, provided they have CI Docker builds. Docker images will usually be available with a delay after code releases, even with complete automation, so image polling is more reliable.
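For example, a cron entry could fetch the Dockerfile and compare it against the last copy (the URL is a placeholder for the Dockerfile of whatever image you track):

wget -q -O Dockerfile.new "https://raw.githubusercontent.com/someorg/someimage/master/Dockerfile"
if ! cmp -s Dockerfile.new Dockerfile.last; then
  mv Dockerfile.new Dockerfile.last
  # trigger your Jenkins build here, e.g. via its remote build-trigger URL
fi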
Other projects
There was also a proposal for a 2019 Google Summer of Code project called Polling Docker Registries for Image Changes that tried to solve this problem for Jenkins users (incl. apparently Google), but sadly it was not taken up by participants.
Run a cron job with a periodic docker search to list all tags in the docker image of interest (here's the script). Note that this script requires the substitution of the jannis/jq image with an existing image (e.g. docker run --rm -i imega/jq).
Save the resulting tags list to a file and monitor it for changes (e.g. with inotifywait).
Fire a POST request with curl to your Jenkins server's endpoint, using the Generic Webhook Trigger plugin (see the sketch after the cautions below).
Cautions:
for efficiency reasons, this tag-listing script should be limited to the first few (say, 3) pages, or to simple repos with only a few tags,
image tag monitoring relies on tags being updated correctly (automatically) after each image change, rather than being stuck in the past; for example, Ubuntu's trusty-20190515 tag was updated a few days ago (late November) without its mid-May date tag changing.
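Putting those steps together, a minimal sketch (the image, Jenkins URL, and token are placeholders; the tags endpoint is Docker Hub's v2 repositories API, and imega/jq is used as in the note above):

curl -s "https://hub.docker.com/v2/repositories/library/ubuntu/tags/?page_size=25" \
  | docker run --rm -i imega/jq -r '.results[].name' | sort > tags.new
if ! cmp -s tags.new tags.last; then
  mv tags.new tags.last
  curl -X POST "https://jenkins.example.com/generic-webhook-trigger/invoke?token=MY_TOKEN"
fi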
Configuring a Mesos slave to use Amazon ECR gives the following error when it receives a job:
Unsupported auth-scheme: BASIC
Does this look familiar to anyone? I'm running the slave pointed at my user's Docker config JSON file, which I updated by issuing a docker login beforehand.
Turns out it was my bad. I didn't modify the application that started this job to change the Docker image name to include the registry as well.
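For reference, the image name handed over needs the full ECR registry prefix, roughly like this (the account ID, region, and repository name are placeholders):

docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest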