I am using google-cloud-build as CI to test if a PR breaks the build or not.
The build is basically creating a Docker image.
To reduce the build time, I am trying to use Docker's --cache-from feature, but it fails for me on a COPY step because, when using a GitHub App trigger, most file permissions are changed for some reason.
When using a GitHub trigger this issue does not happen, but I cannot trigger it on a PR, as stated here.
Is there a way to prevent Cloud Build from changing file permissions when using a GitHub App trigger? Or is there another way to solve this?
For now, we decided that this is fine: if we want specific permissions on a file, we will set them manually in the Dockerfile using RUN chmod ...
After some testing I've found that Cloud Build does not actually change the file permissions; they are preserved from the origin Cloud Build gets the resources from. What changes them seems to be GitHub, or rather Git itself.
I had 2 files with the permissions -r--r--r--, and when I pushed them to GitHub, in Cloud Build I could see that those files had the permissions -rw-rw-r--. To be sure of what was happening, I then cloned the repo somewhere else, and the files pulled from the repo also had the permissions -rw-rw-r--. So the cause seems to be on the GitHub/Git side: Git only records whether a file is executable or not, so the other permission bits are recreated from the default umask on checkout rather than preserved.
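For what it's worth, you can check what Git actually stores for a file; it keeps only two regular-file modes (the path below is just an example):

# Git stores regular files either as mode 100644 (non-executable)
# or 100755 (executable); read-only bits are not recorded
git ls-files --stage path/to/file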
As you mentioned in your answer, the best approach is to change permissions at build time.
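A minimal sketch of that build-time fix (the base image, paths, and mode below are only examples, not anything from the original question):

# Files come out of the checkout with Git's default modes (e.g. -rw-rw-r--)
FROM alpine:3.19
COPY config/ /etc/myapp/
# Re-apply the permissions the application actually expects
RUN chmod 0444 /etc/myapp/*.conf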
We use JFrog Artifactory for managing Docker images created from Dockerfiles. It has a nice feature where you can see all the "layers" that were involved in creating any given final Docker image.
We have to be careful though, so that credentials do not wind up showing in the layers where they were used. The way we currently do this is by using multistage builds with "COPY --from".
However, we recently needed to use credentials for a particular yum repository, which supplies many of the dependencies we need (thousands of files spread throughout the file system). I used yum-config-manager to set the username and password from ENV variables. However, even if I use FROM depbuilder, the commands from all the prior stages (including depbuilder) now become visible in Artifactory.
I need to avoid that, and a colleague suggested that we could simply do this:
COPY --from=depbuilder / /
That way it wouldn't show the other stage's steps as part of the build history in Artifactory. However, I'm afraid that this command might not set all the ownership and permissions correctly, or might miss certain files, since the documentation on how it works seems spotty at best.
So what's the best way to copy everything from a prior build stage in a way that would be invisible to someone looking at the build layers in Artifactory?
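Spelled out, the suggested pattern would look something like the sketch below (the base image, package name, and repo URL are made up for illustration; as far as I can tell COPY --from carries over the modes and ownership from the source stage, but I have not found that clearly documented, hence the question):

# Build stage: knows the repo credentials. Its RUN lines and build args
# only live in this intermediate stage, which is never pushed.
FROM centos:7 AS depbuilder
ARG YUM_USER
ARG YUM_PASS
RUN yum install -y yum-utils \
 && yum-config-manager --add-repo "https://${YUM_USER}:${YUM_PASS}@yum.example.com/private.repo" \
 && yum install -y some-dependency \
 && yum clean all

# Final stage: start from scratch and copy the entire filesystem of the
# build stage, so only this single COPY appears in the pushed history.
FROM scratch
COPY --from=depbuilder / /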
Here is a screenshot showing what the layers look like in Artifactory (if we expand the RUN steps, we can currently see the credentials passed into Docker via ENV, since they become part of the Artifactory URL):
Thanks for any help!
I ran into this situation because the documentation was not clear. The gcloud builds submit --tag gcr.io/[PROJECT-ID]/helloworld command will
archive the contents of my source folder and then run the docker build on the Google build server.
Also, it only looks at the .gitignore file to decide which contents to archive. If it is a Docker build, it should honor the .dockerignore file.
Also, there is no word about how to compile the application. If it is not a precompiled application, it has to be compiled before it is dockerized.
The quick guide only considers the case where the application is precompiled and all the contents of the folder, as filtered by .gitignore, are required to run it. People new to the technology will not be aware of all that; I have just figured it out by myself.
So the alternate way of doing all that is to either include the build steps in the Dockerfile (which will make my image heavy), or create a Docker image locally (manually), then submit the image to the repository (manually), and then publish it to Cloud Run (using the second documented command, or manually).
Is there anything I am missing over here?
Cloud Build respects .dockerignore. It will upload all files that are not in .gitignore, but once uploaded, it will respect .dockerignore regarding which files to use for the build.
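As an illustration (the entries are just examples), a .dockerignore like this keeps those paths out of the build context that Docker sees, even if Cloud Build uploaded them:

# .dockerignore (example entries)
.git
node_modules
*.md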
Compiling your application is usually done at the same time as "containerizing" it. For example, for a Node.js app, the Dockerfile must run npm install --production. I recommend looking at the many examples in the quickstart.
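For example, a minimal Node.js Dockerfile along those lines might look like this (the file names are just the usual conventions, not anything specific to your app):

FROM node:18-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install --production
# Copy the rest of the source, filtered by .dockerignore
COPY . .
CMD ["node", "server.js"]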
I think you've got it, essentially your options are:
Building using Cloud Build
Building locally and pushing using Docker
Generally, if you need additional build steps, I would recommend including them in your Dockerfile. Ideally, you should be able to go from source + Dockerfile to a complete image in either case.
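For reference, the two options look roughly like this ([PROJECT-ID], the service name, and the region are placeholders):

# Option 1: build remotely with Cloud Build
gcloud builds submit --tag gcr.io/[PROJECT-ID]/helloworld

# Option 2: build and push locally with Docker, then deploy
docker build -t gcr.io/[PROJECT-ID]/helloworld .
docker push gcr.io/[PROJECT-ID]/helloworld
gcloud run deploy helloworld --image gcr.io/[PROJECT-ID]/helloworld --region us-central1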
I am new to Kubernetes, so I am wondering: what are the best practices when it comes to putting your app's source code into a container run in Kubernetes or a similar environment?
My app is PHP, so I have PHP (FPM) and Nginx containers (running on Google Container Engine).
At first I had a git volume, but there was no way of changing app versions like this, so I switched to an emptyDir volume and kept the source code as a zip archive in one of the images, which would unzip it into this volume on start. Now I have the source code in both images via git, with a separate git directory, so I have /app and /app-git.
This is good because I do not need to share or configure volumes (fewer resources and less configuration), the app's layer is reused in both images so there is no impact on size, and since it is git the "base" is built in, so I can simply adjust my Dockerfile command at the end and easily switch to a different branch or tag.
I wanted to download an archive with the source code directly from the repository by providing credentials as arguments during the build process, but that did not work because Bitbucket creates archives with the last commit id appended to the directory name, so there was no way of knowing what unpacking the archive would produce. So I got stuck with git itself.
What are your ways of handling the source code?
Ideally, you would use continuous delivery patterns, which means using Travis CI, Bitbucket Pipelines, or Jenkins to build the image on every code change.
That is, every time your code changes, your automated build gets triggered and builds a new Docker image containing your source code. Then you trigger a Deployment rolling update to update the Pods with the new image.
If you have dynamic content, you would likely put it on persistent storage, which is re-mounted on Pod updates.
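To make the rolling-update step concrete, a sketch (the deployment, container, and image names are placeholders):

# CI has built and pushed e.g. registry.example.com/myapp:1.3;
# point the Deployment at the new image and wait for the rollout
kubectl set image deployment/myapp php-fpm=registry.example.com/myapp:1.3
kubectl rollout status deployment/myapp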
What we've done traditionally with PHP is an overlay at runtime. Basically, the container has a volume mounted into it with deploy keys to your git repo. This allows you to perform git pull operations.
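Roughly, such a startup step could look like this (the key path, repo directory, and branch are just examples):

# Deploy key is mounted read-only into the container; pull the latest code
GIT_SSH_COMMAND='ssh -i /secrets/deploy_key -o StrictHostKeyChecking=no' \
  git -C /var/www/html pull origin master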
The more buttoned-up approach is to have custom, tagged images of your code extended from fpm or whatever base image you're using. That way you would run version 1.3 of YourImage, where YourImage contains code version 1.3 of your application.
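A minimal sketch of such an image (the base image and paths are assumptions):

# Built and tagged as yourimage:1.3, containing code version 1.3
FROM php:fpm
COPY . /var/www/html
RUN chown -R www-data:www-data /var/www/html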
Try to leverage continuous integration and continuous deployment. You can use Jenkins as a CI/CD server and create jobs for building, pushing, and deploying images.
I recommend putting your source code into the Docker image instead of pulling it from the git repo. You can also extract configuration files from the Docker image: Kubernetes v1.2 introduced the ConfigMap feature, so we can put configuration files in a ConfigMap. When a Pod runs, the configuration files are mounted automatically. It's very convenient.
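A minimal sketch of that ConfigMap approach (the names, key, and mount path are made up):

# ConfigMap holding a config file, mounted into the PHP container
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
data:
  app.ini: |
    display_errors = Off
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: php-fpm
    image: php:fpm
    volumeMounts:
    - name: config
      mountPath: /etc/myapp
  volumes:
  - name: config
    configMap:
      name: myapp-config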
(I understand this question is somewhat out of scope for Stack Overflow, because it contains several problems and is somewhat vague. Suggestions on how to ask it in the proper way are welcome.)
I have some open source projects that depend on each other.
The code resides on GitHub, and the builds happen on Shippable using Docker images, which in turn are built on Docker Hub.
I have set up an artifact repo and a Debian repository; the Shippable builds put the packages there, and the Docker builds use them.
The build chain looks like this in terms of deliverables:
pre-zenta docker image
zenta docker image (two steps of docker build because it would time out otherwise)
zenta debian package
zenta-tools docker image
zenta-tools debian package
xslt docker image
adadocs artifacts
Currently I am triggering the builds by pushing to GitHub, and sometimes by rerunning failed builds on Shippable after the Docker build has run.
I am looking for solutions for the following problems:
Where to put the Dockerfiles? Now they are in the repo of the package that needs the resulting Docker image for its build. This way, all the information needed to build the package is in one place, but sometimes I have to trigger an extra build to have the package actually built.
How to trigger builds automatically?
..., and in a way that supports git-flow? For example, if I change the code on the zenta develop branch, I want to make sure that zenta-tools builds and tests against the development version of it before merging into master.
Is there a tool with which I can get an overview of the health of the whole build chain?
Since your question is related to Shippable, I've created a support issue for you here - https://github.com/Shippable/support/issues/2662. If you are interested in discussing the best way to handle your scenario, you can also send me an email at support@shippable.com. You can set up your entire flow, including building the Docker images, using Shippable.
Situation:
I installed Jenkins on my vserver and set up a "freestyle pipeline". I connected it to my GitHub repository via a push webhook, which works (when I push to the repository, a new build job is started in Jenkins).
Problem:
I can't seem to find the working directory where the git pull is executed. I already searched for answers, and many people say $JENKINS_HOME, but echo $JENKINS_HOME returns a blank line for me. Did I do anything wrong, or where is my project then? Also, can I set the path the repository is pulled to, to a custom path (say /root/myprojectname)?
EDIT:
I can see the workspace in the Jenkins web user interface, but I can't find the corresponding folder on the vserver's drive.
Did you check /var/lib/jenkins? By default, the Jenkins home directory is located there on Linux servers. You can also see the home directory by browsing Manage Jenkins --> Configure System.
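Assuming a default installation on Linux (the job name below is a placeholder), the checkout usually ends up under the workspace directory; note that $JENKINS_HOME is only defined inside Jenkins' own environment and build steps, which is why echoing it in a normal shell prints a blank line:

# Default workspace location when Jenkins runs as a Linux service
ls /var/lib/jenkins/workspace/               # one folder per job
ls /var/lib/jenkins/workspace/myprojectname/ # the pulled repository

If you want the checkout at a fixed path such as /root/myprojectname, a freestyle job also has a "Use custom workspace" setting under its advanced project options; just make sure the Jenkins user can write there.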