GitHub Actions using a custom Docker image - docker

I have a custom Docker image that runs certain security checks on the GitHub repository and posts the results to a URL for review/analysis. I can see that GitHub Actions now supports custom Docker images. My question is: will the runner environment variables and the cloned repo be automatically mounted into the custom container, or do I need to pass them individually?
Also, my custom container image is around 1 GB in size. Downloading and running it on every build will slow down the tests and builds. What is the best option in this case? Is there any way I can cache the image? If not, are there any other workarounds?
Thanks

My question is: will the runner environment variables and the cloned repo be automatically mounted into the custom container, or do I need to pass them individually?
Automatically mounted, no; you have to tell the runner that it should use your custom container:
container:
  image: ghcr.io/owner/image
  # ...
Refs: GitHub docs
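For a bit more context, a minimal job using that key might look like the sketch below; the job name, image name, and entry script are placeholders, but with container: set, the steps run inside that image:

jobs:
  security-checks:
    runs-on: ubuntu-latest
    container:
      image: ghcr.io/owner/image   # placeholder for your security-check image
    steps:
      - uses: actions/checkout@v4  # the repo contents end up in the job's workspace
      - run: ./run-checks.sh       # hypothetical entry script shipped in the image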
Is there any way I can cache the image? If not, are there any other workarounds?
I think you should look into the cache action. I guess you should be able to download your image and cache the path where you store it, or something like that.
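One hedged sketch of that idea, assuming the job runs directly on the runner (not inside the container) so it can pull and save the image; the cache key, path, and image name are illustrative:

steps:
  - uses: actions/cache@v4
    with:
      path: /tmp/image-cache
      key: security-image-v1        # illustrative key; bump it when the image changes
  - name: Load or pull the image
    run: |
      if [ -f /tmp/image-cache/image.tar ]; then
        docker load -i /tmp/image-cache/image.tar
      else
        docker pull ghcr.io/owner/image
        mkdir -p /tmp/image-cache
        docker save ghcr.io/owner/image -o /tmp/image-cache/image.tar
      fi

Whether this ends up faster than just pulling from the registry depends on the runner's network and the registry, so it is worth timing both variants.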

Related

Docker pull with URL path

Good day all,
Does anyone know if it's possible to just pull a single container from GitHub? I have this link https://github.com/aws/sagemaker-pytorch-training-toolkit and I would like to pull the container in this link https://github.com/aws/sagemaker-pytorch-training-toolkit/tree/master/src/sagemaker_pytorch_container.
I did try building with docker build -t https://github.com/abc/sagemaker-pytorch-training-toolkit.git to just build an image from one file, but there's an __init__.py file which I'm not sure is necessary.
Thanks
You are on the wrong path.
GitHub does not store Docker images, so there is no way you can pull one from there.
AWS SageMaker provides pre-built images; you just need to select the one you want to use when creating an instance. See https://docs.aws.amazon.com/sagemaker/latest/dg/howitworks-create-ws.html
If you need a Docker image with PyTorch, just run docker pull pytorch/pytorch

How to instruct docker or docker-compose to automatically build image specified in FROM

When processing a Dockerfile, how do I instruct docker build to build the image specified in FROM locally using another Dockerfile if it is not already available?
Here's the context. I have a large Dockerfile that starts from the base Ubuntu image, installs Apache, then PHP, then some custom configuration on top of that. Whether this is a good idea is another matter; let's assume the build steps cannot be changed. The problem is, every time I change anything in the config, everything has to be rebuilt from scratch, and this takes a while.
I would like to have a hierarchy of Dockerfiles instead:
my-apache : based on stock Ubuntu
my-apache-php: based on my-apache
final: based on my-apache-php
The first two images would be relatively static and can be uploaded to Docker Hub, but I would like to retain the option to build them locally as part of the same build process. Only one container will exist, based on the final image. Thus, putting all three as "services" in docker-compose.yml is not a good idea.
The only solution I can think of is to have a manual build script that, for each image, checks whether it is available on Docker Hub or locally and, if not, invokes docker build.
Are there better solutions?
I found this article on automatically detecting dependencies between Dockerfiles and building them in the proper order:
https://philpep.org/blog/a-makefile-for-your-dockerfiles
The actual Makefile from Philippe's Git repo provides even more functionality:
https://github.com/philpep/dockerfiles/blob/master/Makefile
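As a rough illustration (directory names and image tags are hypothetical, not taken from either link), a hand-written Makefile encoding that dependency chain could look like this:

.PHONY: my-apache my-apache-php final

# (recipe lines must be indented with a tab)
my-apache:
	docker build -t my-apache ./my-apache

my-apache-php: my-apache
	docker build -t my-apache-php ./my-apache-php

final: my-apache-php
	docker build -t final ./final

Running make final then builds the three images in dependency order, with Docker's layer cache keeping unchanged stages fast; the linked Makefile goes further and discovers the FROM dependencies automatically instead of hard-coding them.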

Override a volume when building a Docker image from another Docker image

Sorry if the question is basic, but would it be possible to build a Docker image from another one, with a different volume in the new image? My use case is the following:
Start from the image library/odoo (cf. https://hub.docker.com/_/odoo/)
upload folders into the volume /mnt/extra-addons
build a new image, tag it, then push it to our internal image repo
How can we achieve that? I would like to avoid putting extra folders onto the host filesystem.
Thanks a lot
This approach seems to work best until the Docker development team adds the capability you are looking for.
Dockerfile
FROM percona:5.7.24 as dbdata
MAINTAINER monkey@blackmirror.org
FROM centos:7
USER root
COPY --from=dbdata / /
Do whatever you want. This eliminates the VOLUME issue. Heck, maybe I'll write a tool to automatically do this :)
You have a few options, without involving the host OS that runs the container.
Make your own Dockerfile, inherit from the library/odoo Docker image using a FROM instruction, and COPY files into the /mnt/extra-addons directory. This still involves your host OS somewhat, but may be acceptable since you wouldn't necessarily be building the Docker image on the same host you were running it on.
Make your own Dockerfile, as in (1), but use an entrypoint script to download the contents of /mnt/extra-addons at runtime. This would increase your container startup time, since the download would need to take place before running your service, but no host directories would need to be involved.
Personally I would opt for (1) if your build pipeline supports it. That would bake the addons right into the image, so the image itself would be a complete, ready-to-go build artifact.
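A minimal sketch of option (1), assuming the addons sit next to the Dockerfile; the tag, folder name, and registry address are illustrative:

# Dockerfile
FROM odoo:latest                         # illustrative; pin the version you actually use
COPY ./extra-addons/ /mnt/extra-addons/

# build, tag and push to the internal registry (names are placeholders):
#   docker build -t registry.example.com/odoo-custom:1.0 .
#   docker push registry.example.com/odoo-custom:1.0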

Docker: how to add a host entry to a generic image available in the Docker repository

There is a generic selenium/node-firefox Docker image available in the Docker repository. I need to make changes/additions to the image so that it has our test environment host entries.
What would be the best approach to do this? Should I just take the source, make the changes, and build my own image?
In terms of maintainability, is it possible to do it in such a way that it always gets the base image and my changes are applied on top of it to make a new image? If so, how can this be done?
When you run a Docker container, there is an --add-host argument that lets you specify which host entries you need to make available to the container. This is similar to updating the /etc/hosts file.
docker run --add-host myserver:192.168.0.100 the-image-name
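If the node is started through Docker Compose instead of docker run, the equivalent setting is extra_hosts; the service name below is just a placeholder:

services:
  firefox-node:                    # placeholder service name
    image: selenium/node-firefox
    extra_hosts:
      - "myserver:192.168.0.100"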
You don't need to update the source image to accomplish this. If you need to customize a Docker image beyond what the runtime arguments give you, you can always derive your own Dockerfile from the image (although you should research best practices around deriving images and avoid deeply nested image hierarchies).
Here is a reference page.

How do I setup a docker image to dynamically pull app code from a repository?

I'm using Docker Cloud at the moment. I'm trying to figure out a development-to-production workflow using Docker with Docker Compose to pull application code for multiple applications of the same type, simply changing the repository each one pulls from. I understand the concept of mounting a volume, but all the examples show the source code in the same repo as the Dockerfile and Docker Compose file (example). I want the app code from this example to come from a remote, dynamic repo. Would I set an environment variable in the Docker image? If so, how?
Any example or link to a workflow example is appreciated.
If done right, the code "baked" into Docker images should be immutable and the only thing that should change at runtime is configurable parameters like environment variables (e.g. to set the port the app will listen on).
Ideally, you should bake your code into the image. Otherwise you're losing a lot of the benefit of using Docker in the first place.
The problem is..
.. your use case does not match the best practice. You want an image without any code embedded in it, where the code is fetched on each update instead. If you browse Docker Hub you'll find many images named service:version. That's one of the benefits of Docker: offering different versions of the same service. If you always want the most up-to-date code, your workflow may have some downsides.
One solution could be
Webhooks, especially if your code is versioned on GitHub, or any continuous integration tool.
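Putting those pieces together, one possible sketch is to keep baking the code in, but parameterize the build with the repository URL, so each application of the same type only differs in a build argument, and a webhook/CI job rebuilds the image on each push. All names below are placeholders:

# Stage 1: fetch the application code at image build time
FROM alpine/git AS source
ARG APP_REPO=https://github.com/example/app.git    # placeholder; override per application
RUN git clone --depth 1 "$APP_REPO" /src

# Stage 2: runtime image (nginx is only an example; use whatever runs your app type)
FROM nginx:alpine
COPY --from=source /src /usr/share/nginx/html

# Build one image per application:
#   docker build --build-arg APP_REPO=https://github.com/example/other-app.git -t other-app .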
