I have a Dockerfile and a .tex file in my repository. I use GitHub Actions to build an image (Ubuntu 18.10 with packages for PDFLaTeX) and run a container, which takes main.tex and produces main.pdf with PDFLaTeX. So far everything seems to work OK, but the problem is that I can't copy the PDF from the container to the repository. I tried using docker cp:
docker cp pdf-creator:/main.tex .
But it doesn't seem to work, as the PDF doesn't appear in my repository. Can you think of any other way to solve this?
The docker cp command copies a file onto the local filesystem. In the context of a GitHub Action, that filesystem belongs to whatever virtual environment (the runner) is executing your workflow: it has nothing to do with your repository.
The only way to add something to your repository is to git add the file, git commit the change, and git push the change to your repository (which of course requires providing your Action with the necessary credentials to push changes to your repository, probably using a GitHub Secret).
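A minimal sketch of such a step, assuming the workflow checked out the repository with actions/checkout first and has permission to push (e.g. a PAT stored as a secret, or the workflow's GITHUB_TOKEN with write access):
- name: Commit and push generated PDF
  run: |
    git config user.name "github-actions"
    git config user.email "github-actions@users.noreply.github.com"
    git add main.pdf
    git commit -m "Add generated main.pdf"
    git push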
But rather than adding the file to your repository, maybe you want to look at support for Artifacts? This lets you save files generated as part of your workflow and make them available for download.
The workflow step would look something like:
- name: Archive generated PDF file
  uses: actions/upload-artifact@v2
  with:
    name: main.pdf
    path: /main.pdf
See the linked docs for more information.
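Note that upload-artifact only sees the runner's filesystem, so the PDF first has to be copied out of the container into the workspace. A minimal sketch, assuming the container is named pdf-creator and writes /main.pdf inside it:
- name: Copy PDF out of the container
  run: docker cp pdf-creator:/main.pdf ./main.pdf
followed by the upload step above with path: main.pdf.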
I have a (private) repository at GitHub with my project and an integrated GitHub Actions workflow that builds a Docker image and pushes it directly to GHCR.
But I have a problem with storing and passing secrets to my build image. I have the following structure in my project:
.git (gitignored)
.env (gitignored)
config (gitignored) / config files (jsons)
src (git) / other folders and files
As you can see, I have a .env file and a config folder. Both of them hold data or files that are not in the repo but are required in the built environment.
So I'd like to ask: is there any option not to push all these files to my main remote repo (even though it's private), but to link them in during the build stage within GitHub Actions?
It's not a problem to publish the env file and configs somewhere else, privately, in another separate private remote repo. The point is not to push these files to the main private repo, because the RBAC logic doesn't allow me to restrict access to selected files.
P.S. Any advice about GitLab CI or Bitbucket is also appreciated, if you know how to solve the problem there. Don't be shy to share it!
Since this question seems to get some attention, here is the answer I found for it.
The example shown below is based on a Node.js/NestJS app and pulls a private remote repo from GitHub.
In my case, the scenario was about pulling config files and other secrets from a separate private repo and merging them into the project during the container build. This option isn't about the security of secrets inside the container itself; it's about making one part of the project (the repo with the business logic) available to other developers without them seeing the credentials and configs, which live in a separate private repo with its own access permissions.
All you need is your personal access token (PAT); on GitHub you can create one under Settings > Developer settings > Personal access tokens.
As for GitLab, the flow is essentially the same: you'll need to pass a token configured somewhere in the settings. Also, a piece of advice: create not just one but two Dockerfiles before testing this.
Why HTTPS instead of SSH? With SSH you would also need to pass SSH keys and configure the client correctly, which is a bit more complicated because of CRLF/LF line endings, the crypto algorithms supported by SSH, and so on.
# it could be Go, PHP, what-ever
FROM node:17
# you will need your GitHub token from settings
# we will pass it to the build env via the GitHub action
ARG CR_PAT
ENV CR_PAT=$CR_PAT
# the GitHub user and the name of the separate private repo to pull from
ARG github_username
ARG repo_name
# update OS in build container
RUN apt-get update
RUN apt-get install -y git
# workdir app, it is a cd (directory)
WORKDIR /usr/src/app
# installing nest library
RUN npm install -g @nestjs/cli
# config git with credentials
# we will use https since it is much easier to config instead of ssh
RUN git config --global url."https://${github_username}:${CR_PAT}@github.com/".insteadOf "https://github.com/"
# cloning the repo to WORKDIR
RUN git clone https://github.com/${github_username}/${repo_name}.git
# we move all files from pulled repo to root of WORKDIR
# including files named with dot at the beginning (like .env)
RUN mv ${repo_name}/* ${repo_name}/.[^.]* . && rmdir ${repo_name}/
# node.js stuff
COPY package.json ./
RUN yarn install
COPY . .
RUN nest build app
# start the built app (the entrypoint path depends on your build output)
CMD ["node", "dist/main.js"]
As a result, you'll get a fully built container with your code merged with the files and code from the other separate repo that we pull from.
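On the GitHub Actions side, the PAT can be handed to the build as a build argument. A minimal sketch, assuming the token is stored in a repository secret named CR_PAT; the user, repo, and image names are placeholders:
- name: Build image with the PAT as a build-arg
  run: |
    docker build \
      --build-arg CR_PAT=${{ secrets.CR_PAT }} \
      --build-arg github_username=your-github-user \
      --build-arg repo_name=your-secrets-repo \
      -t ghcr.io/${{ github.repository_owner }}/my-image:latest .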
I have a repository that has multiple directories, each containing a Python script, a requirements.txt, and a Dockerfile. I want to use GitHub Actions to build an image on merge to the main branch and push this image to GitHub Packages. Each Dockerfile defines a running environment for the accompanying Python script. But I want to build only the Dockerfile of the directory in which changes were made, and tag each image with the directory name and a version number that is independent of the other directories' versions. How can I accomplish this? An example workflow.yml file would be awesome.
Yes, it is possible.
In order to trigger a separate build for every Dockerfile, you will need a separate workflow: one workflow per Dockerfile. The workflows should be triggered on push to main and use a paths filter that matches the directory containing the Dockerfile.
For building the images themselves you can use the Build and push Docker images GitHub Action (docker/build-push-action).
Another thread describes building only the specific Dockerfile that has changed, using another GitHub Action:
Build Specific Dockerfile from set of dockerfiles in github action
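For example, a workflow that builds and pushes an image only when something under a hypothetical service-a/ directory changes could look roughly like this (directory name, tag, and registry path are placeholders):
name: build-service-a
on:
  push:
    branches: [main]
    paths:
      - 'service-a/**'
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v2
        with:
          context: service-a
          push: true
          tags: ghcr.io/${{ github.repository }}/service-a:1.0.0
Duplicate this workflow for each directory, adjusting the paths filter, the build context, and the tag.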
I want to create a github docker container action that uses files from the repository using the action.
Do I need to check out the repository myself, or can I specify a path where the files are located if the repository was checked out in an earlier step?
GitHub Actions themselves do not have a built-in concept of your repository or branch. A container action runs inside its own container and only has that environment to interact with.
Actions can have inputs. You can create an input that should point to the location of the user's repository.
In your action.yml:
inputs:
  repo_path:
    description: 'The path to the repository in the current environment'
    required: true
In your code, you can check for the repo_path input to get the path to the repository on the file system.
You can also check whether GITHUB_WORKSPACE points to a Git repository, which would mean the user ran the actions/checkout action in a prior step.
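On the workflow side, the caller would check out the repository and pass the path along, roughly like this (the action reference is a placeholder):
steps:
  - uses: actions/checkout@v2
  - uses: your-org/your-docker-action@v1
    with:
      repo_path: ${{ github.workspace }}
For Docker container actions the runner also mounts the workspace into the container (at /github/workspace), so files checked out in an earlier step are visible there as well.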
I am trying to learn Docker from other Dockerfiles and set up a customised development environment for my projects.
But from those Dockerfiles, I don't understand where the source files for ADD and COPY come from. How do I create them myself? What code should I put inside them?
For instance, fauria/lamp:
COPY run-lamp.sh /usr/sbin/
Where can I get this file or create it? What are the lines inside that file?
Again, nickistre/ubuntu-lamp:
ADD supervisord.conf /etc/
Where can I get a copy of it?
Another one, linuxconfig/lamp:
# Include supervisor configuration
ADD supervisor-lamp.conf /etc/supervisor/conf.d/
ADD supervisord.conf /etc/supervisor/
supervisor-lamp.conf and supervisord.conf?
Any ideas?
When you run docker build ., the files in the folder . that are not excluded by the .dockerignore file are sent to the Docker engine as the build context. The COPY and ADD instructions resolve their source paths against this context.
With your first example, the Dockerfile is located in a GitHub repo (linked on the right side of the page on Docker Hub), and inside that repo is the run-lamp.sh script. Therefore, if you're trying to reproduce the image, you would check out the linked GitHub repo and perform your build from within that folder.
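Roughly, assuming the linked repo is fauria/docker-lamp (check the source link on the Docker Hub page for the exact URL):
git clone https://github.com/fauria/docker-lamp.git
cd docker-lamp
ls   # run-lamp.sh and the Dockerfile sit here, inside the build context
docker build -t my-lamp .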
I have a private Git repository that I have to add to my Docker image. For that I clone it locally into the same directory as the Dockerfile and then use the following Dockerfile instruction:
ADD my_repo_clone /usr/src/
My repo has a version tag that I clone, v1. So the files that I clone are always the same.
The problem is that when I build this docker image I always get a new image instead of replacing the old one:
docker build --rm -t "org_name/image_name" .
Apparently, because the ctime of the files changes, the Docker cache does not see my files as identical, so I always get a new image, which I want to avoid.
I tried to touch the cloned repo and change atime and mtime to a fixed date, but it is still not enough.
How can I stop Docker (without changing the Docker source code that computes the file hashes and building it again) from creating new images all the time?
Or how can I clone the repo during the image build process? (For this I would need SSH forwarding since the repo is private, and I also could not make SSH agent forwarding work during an image build.)
Since you don't care about the repository itself and just need the files for tag v1, you can use git archive instead of git clone to produce a tar archive holding the files for tag v1.
Then use the Docker ADD directive to inject the archive into the image.
The mtime of the produced tar archive will be the time of the tag as documented:
git archive behaves differently when given a tree ID versus when given a commit ID or tag ID. In the first case the current time is used as the modification time of each file in the archive. In the latter case the commit time as recorded in the referenced commit object is used instead.
try:
git archive --remote=https://my.git.server.com/myoproject.git refs/tags/v1 --format=tar > v1.tar
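The archive can then be injected into the image; ADD auto-extracts local tar archives, so the corresponding Dockerfile line would be roughly:
# ADD extracts the local v1.tar into /usr/src/ inside the image
ADD v1.tar /usr/src/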