I'm using a custom GitHub composite action that builds an image and then runs it.
Here is the link for the action (and the Dockerfile):
https://github.com/convictional/trigger-workflow-and-wait
The problem is that I'm hitting Docker Hub's rate limit for unauthenticated pulls.
I tried to find a way to authenticate to Docker Hub with my user, but couldn't find one.
Is there any way to authenticate before using the composite action, to prevent the pull limit from being hit?
I tried to authenticate in a step before the action is used, but that didn't work because building the image is the first thing executed in the job, so my authentication is ignored.
I also tried to authenticate in a job before the one that uses the action, and then cache (or upload as an artifact) the config.json file, but that didn't work either; I still hit the rate limit.
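For reference, the login step I attempted looked roughly like the sketch below (the secret names are placeholders for whatever you store in your repository secrets). The catch described above is that a container action's image is built before any job steps run, so a login step like this comes too late:

```yaml
# Sketch: log in to Docker Hub before any step that pulls images.
# DOCKERHUB_USERNAME / DOCKERHUB_TOKEN are assumed repository secrets.
steps:
  - name: Login to Docker Hub
    uses: docker/login-action@v3
    with:
      username: ${{ secrets.DOCKERHUB_USERNAME }}
      password: ${{ secrets.DOCKERHUB_TOKEN }}
  # ...then the composite action that builds and runs the image
```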
Is it possible to cache the build of a GitHub Action so that it isn't rebuilt the next time? I created an action that compiles PHP with certain settings, all packaged into a Docker image. Building that image takes about 2 minutes, which is a lot, since on subsequent runs there is no point in rebuilding the whole image instead of reusing the previous build.
Is there some way to cache or save this Docker build somewhere?
You can use native caching; GitHub provides an action for this: https://github.com/actions/cache .
You'd place this caching step before your build step.
Essentially, you'd create a key from a hash of a file (e.g. a manifest file) or directory. The cache action then checks whether that key matches the key of an existing cache. If it matches, the cache is restored. If it doesn't, your build step should carry an if condition such as steps.build-cache.outputs.cache-hit != 'true' so that it only runs when the cache is invalid.
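A minimal sketch of that setup (the cached path, key file, and build command are placeholders for your project's own):

```yaml
# Sketch: restore a cache keyed on a manifest file, and skip the build on a hit.
steps:
  - uses: actions/checkout@v4
  - name: Cache build output
    id: build-cache
    uses: actions/cache@v4
    with:
      path: ./build                                # placeholder: whatever your build produces
      key: build-${{ hashFiles('package-lock.json') }}
  - name: Build
    if: steps.build-cache.outputs.cache-hit != 'true'
    run: ./build.sh                                # placeholder build command
```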
The docker-layer-caching action is built on actions/cache and can cache the individual build layers of your Docker image.
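A sketch of layer caching, assuming the satackey/action-docker-layer-caching implementation (the image name is a placeholder):

```yaml
# Sketch: cache individual Docker build layers between workflow runs.
steps:
  - uses: actions/checkout@v4
  - uses: satackey/action-docker-layer-caching@v0.0.11
    continue-on-error: true        # a cache miss should not fail the job
  - run: docker build -t my-image .   # placeholder image name
```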
I'm working on a CI tool that builds new Docker images and pushes them to our registry in AWS ECR. However, I just noticed that I have built the same unchanged image several times, which means I have created and pushed several tags for the same image ID. I would like to avoid spamming our registry with redundant tags. My question is:
is there a way to check the registry for an image ID before pushing the image I just built?
There are multiple ways to handle this:
Case 1 (which I don't feel is right):
check for the tag in ECR as a pre-check, and only then build the image.
Case 2 (which we use currently):
use a git hook to trigger the pipeline (or build) only when there is a change in the repository.
As a side note, tags based on the commit hash or a datetime stamp are helpful when only the binary keeps changing, so you can track which sources the Dockerfile's build depends on.
I'm wondering how I can check that a docker image exists in a private registry (in eu.gcr.io), without pulling it.
I have a service, written in golang, which needs to check for the existence of a docker image in order to validate a config file passed to it by a user.
Pulling the image using the go docker client, as shown here, works. However, I don't want to pull down images just to check they exist, as they can be large.
I've tried using Client.ImageSearch, but this only searches public images. The cloud.google.com/go package also doesn't seem to have anything for dealing with Container Registry.
There's possibly this and the crane tool it contains, but I'm really struggling to figure out how it works. The documentation is... not great.
I'd like the solution to be host agnostic, and the only option I have found is to simply make a http request and use the logic from this answer.
Are there any docker or other packages able to do this in a cleaner way?
Just realised the library I've been using has an unhelpfully named client method, DistributionInspect (link), which returns only the image digest and manifest if the image is found, so the image never gets pulled.
The Nexus 3 Docker repository solution requires that redeploy be allowed in order for the latest tag to work (as described here: Allow redeploy for "latest" docker tag in Nexus OSS).
Since we want our uploaded images to be immutable, but at the same time want to allow tagging with latest, this puts us in a dilemma.
Our current best thinking is to have a white-list of tags that may be re-deployed
(tags like latest, production, etc.).
This means we must be able to trap any Docker image upload request before the storage request is processed. In that pre-check we would then block the redeploy if the image is already present in the repo and the given image tag is NOT in the white-list.
We already have a custom bundle, so we were hoping we could extend its functionality with this new blocking feature.
Is it possible to do something like this?
I am trying to use the Bitbucket Docker image as documented at the link below.
https://bitbucket.org/atlassian/docker-atlassian-bitbucket-server
I got as far as bringing up the container, hitting the URL, and creating the admin user.
But it does not log me in; the message is "invalid userId/password".
A password reset won't work because, this being a container, no mail service is set up.
I cannot proceed past this and it is frustrating.
Any help is appreciated.