Cloud Build docker build produces different output from a local docker build?

I've run into a very strange issue where a Dockerfile is failing in one of its steps when it's built on GCP Cloud Build.
However it builds locally just fine.
What might be the cause of the issue? Why would there be any difference?
The command that is actually failing is an npm build inside the container.

It turned out to be a .env file that I had locally but that was not present in the repository (excluded by .gitignore).
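A quick way to catch this class of problem is to ask git which files in your build context are ignored and therefore missing from the CI checkout. A minimal sketch, assuming you run it from the repository root:

```shell
# Flag files that exist locally but are ignored by git, so they will be
# absent from CI checkouts such as Cloud Build's.
ignored="(git not available)"
if command -v git >/dev/null 2>&1 && git rev-parse --is-inside-work-tree >/dev/null 2>&1; then
  # '!!' lines in --ignored --short output are the ignored files.
  ignored=$(git status --ignored --short | grep '^!!' || true)
fi
echo "ignored files: ${ignored:-none}"
```

If .env shows up in that list, the local build sees it but Cloud Build never will.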

Related

Debug why gradle caching fails across successful docker build instances?

I am trying to make the gradle 6.9 cache work in a docker CI build invoked by Jenkins running in Kubernetes, without access to scan.gradle.org.
The idea is to save an image after gradle --build-cache --no-daemon classes bootJar and use that as the `FROM` of subsequent builds. This works for me on my own machine, but I cannot make it work on the Jenkins server. Everything happens in the gradle home directory, so everything should be cached. I am wondering if the path to that directory matters, as it is deep in a Kubernetes mount under `/var`, and this is the only difference between the two docker builds I can think of.
Caching would preferably be for the whole build, but just caching the Maven dependencies would be a substantial saving.
What am I missing? Is there a way to get insight into why Gradle decides to reuse what it already has or not?

Docker stack deploy not deploying the images that are built after code changes in the repo

I have created jobs for build and deploy.
The jobs run without errors and the project is deployed successfully.
The problem is that even though the repository has changed, the deployed build does not show the changes when tested in the browser, although the workspace successfully pulls the latest code.
A few solutions I tried:
Made the docker image build not use the cache
Made the workspace clear before starting the build
Nothing seems to work. Where might things have gone wrong?
My project structure:
Project directory: contains the entire Laravel project
Dockerfile: to build the image
docker-compose.yml: to deploy the service to the docker stack
I am running the service with the docker stack deploy command.
I tried deleting the previously built docker stack, naming the stack uniquely with the build ID, and re-creating a new stack, but that didn't solve it either.
On trying to remove the stack and create it again, I get these errors:
Failed to remove network 7bkf0kka11s08k3vnpfac94q8: Error response from daemon: rpc error: code = FailedPrecondition desc = network 7bkf0kka11s08k3vnpfac94q8 is in use by task lz2798yiviziufc5otzjv5y0g
Failed to remove some resources from stack: smstake43
Why is the docker stack not picking up the updated image? It looks like a digest issue; how can it be solved if so? Although the changes are pushed to the Bitbucket repo, the deployment does not show them. Only if I redo the whole setup does it fetch the latest code.
This is the bash script for the build in the Jenkins job:
#!/bin/sh
docker build -t smstake:latest .
docker stack deploy -c docker-compose.yml smstake
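One common cause here: Swarm only redeploys a service when its image reference resolves to a new digest, and a locally built `smstake:latest` that never reaches a registry resolves to whatever the nodes already have. A sketch of a build script that tags each Jenkins build uniquely and pushes to a registry (the registry name is an assumption, and the compose file would need to reference the image via the `IMAGE` variable):

```shell
#!/bin/sh
# Tag each Jenkins build uniquely so Swarm sees a new image on every deploy.
set -eu
REGISTRY="registry.example.com"            # assumption: registry reachable by all Swarm nodes
TAG="$REGISTRY/smstake:${BUILD_ID:-dev}"   # BUILD_ID is injected by Jenkins
echo "deploying $TAG"
if command -v docker >/dev/null 2>&1; then
  docker build -t "$TAG" .
  docker push "$TAG"
  # docker-compose.yml should use "image: ${IMAGE}" so the new tag is picked up;
  # --resolve-image always forces digest re-resolution on each deploy.
  IMAGE="$TAG" docker stack deploy --with-registry-auth --resolve-image always \
    -c docker-compose.yml smstake
fi
```

With a unique tag per build, removing and re-creating the stack (and the network errors that causes) becomes unnecessary; the service is simply updated in place.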

Best practice/way to develop Golang app to be run in Docker container

Basically what the title says... Is there a best practice or an efficient way to develop a Golang app that will be Dockerized? I know you can mount volumes to point to your source code, and it works great for languages like PHP where you don't need to compile your code. But for Go, it seems like it would be a pain to develop alongside Docker since you pretty much only have two options I guess.
The first would be to have a Dockerfile based on an onbuild image, so the go app is built and started when a container is run, which means building a new image on every change (however small). Or you mount your source code directory into the container, then attach to the container and do the go build/run manually, as you normally would.
Those two ways are really the only ones I can see, unless you just don't develop your Go app in a docker container: develop it as normal, then use the scratch-image method where you pre-build the Go binary and copy it into your container when you are ready to run it. I assume that is probably the way to go, but I wanted to ask people with more experience on the subject and maybe get some feedback.
Not sure it's the best practice, but here is my way.
Makefile is MANDATORY
Use my local machine and my go tools for small iterations
Use a dedicated build container based on golang:{1.X,latest}, mounting the code directory to build a release, mainly to ensure that my code will build correctly on the CI. (Tip: here is my standard go build command for release builds: CGO_ENABLED=0 GOGC=off go build -ldflags "-s -w")
Test the code
Then use a FROM scratch image to build a release container (copy the binary + entrypoint)
Push your image to your registry
Steps 3 to 6 are tasks for the CI.
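The dedicated-build-container step above can be sketched as a one-shot `docker run` that mounts the source tree, so no custom build image is needed. The image tag and output path here are assumptions:

```shell
# Build a static release binary inside a golang container, with the source
# mounted from the host; nothing is installed on the CI machine itself.
BUILD_IMAGE=golang:latest
if command -v docker >/dev/null 2>&1; then
  docker run --rm \
    -v "$PWD":/usr/src/app -w /usr/src/app \
    -e CGO_ENABLED=0 -e GOGC=off \
    "$BUILD_IMAGE" \
    go build -ldflags "-s -w" -o ./release/app .
fi
```

The resulting `./release/app` is the binary that the FROM scratch release container copies in.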
Important note: this is changing with the new multi-stage builds feature (https://docs.docker.com/engine/userguide/eng-image/multistage-build/); no more build vs release containers.
The build container and release container are merged into one multi-stage build, so one Dockerfile along these lines:
FROM golang:latest AS build
WORKDIR /go/src/myrepos/myproject
COPY . .
RUN go build -o mybin
FROM scratch
COPY --from=build /go/src/myrepos/myproject/mybin /usr/local/bin/mybin
ENTRYPOINT [ "/usr/local/bin/mybin" ]
Lately, I've been using
https://github.com/thockin/go-build-template
As a base for all of my projects. The template comes with a Makefile that will build/test your application in a Docker container.
As far as I understand from your question, you want a running container in which to develop a golang application. The same thing can be done on your host machine as well, but the nice thing is that if you can build such an environment, it effectively becomes a cloud Platform-as-a-Service (PaaS).
The basic requirements for the container are an Ubuntu image and packages such as an editor, the golang compiler, and so on.
I would suggest looking at the docker development environment:
https://docs.docker.com/opensource/project/set-up-dev-env/
The docker development environment runs inside a container, and the files are mounted from one of the host's directories. The container image is built from an Ubuntu base image, with the packages needed to compile the docker source code added on top.
I hope this is close to what you are looking for.

How to cache downloaded dependencies for a Jenkins Docker SSH Slave (Gradle)

We have a Jenkins Docker slave template that successfully builds a piece of software, for example a Gradle project. It is based on https://hub.docker.com/r/evarga/jenkins-slave/.
When we fire up the docker slave, the dependencies are downloaded every time we do a build. We would like to speed up the build so that downloaded dependencies can be reused by the same build or even by other builds.
Is there a way to specify an external folder so that cache is used? Or another solution that reuses the same cache?
I think the described answers only work as exclusive caches for each build job. If I have different Jenkins jobs running on docker slaves, I will get into trouble with this scenario: if the jobs run at the same time and write to the same mounted cache on the host filesystem, it can become corrupted. Otherwise you must mount a folder with the job name as part of the filesystem path (a Jenkins job only runs once at a time).
Here's an example for Maven dependencies; it's exactly what Opal suggested: you create a volume which refers to a cache folder on the host.
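A minimal sketch of the named-volume approach, with one volume per job to avoid the concurrent-write corruption described above. The volume naming scheme and the cache path inside the slave are assumptions:

```shell
# Persist the dependency cache in a named volume, one per Jenkins job,
# so concurrent jobs never write to the same cache.
CACHE_VOLUME="gradle-cache-${JOB_NAME:-shared}"   # JOB_NAME is set by Jenkins
if command -v docker >/dev/null 2>&1; then
  docker volume create "$CACHE_VOLUME"
  # Mount the volume over the Gradle home inside the slave container.
  docker run --rm -v "$CACHE_VOLUME":/home/jenkins/.gradle evarga/jenkins-slave true
fi
```

The first build of each job populates its volume; subsequent builds of the same job reuse the downloaded dependencies.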

Docker Hub - Automated Build failed, but local build without problems

I have created the repository ldaume/docker-highcharts-server in the Docker Hub Registry which is connected to a github repository which contains the Dockerfile.
If I build the image locally it works like a charm.
But the automated build fails with the error Unknown Build Error. and no logs. The only content I can see in the build information is the Dockerfile, so docker had no problems with github ;).
Any ideas?
In my scenario I had a file in my current directory that was not checked in, but my Dockerfile COPYed it into the container. Have you looked into that possibility?
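You can check for that directly by diffing the files in your build context against the files git actually tracks; anything in the difference is invisible to a Docker Hub automated build. A sketch, assuming it is run from the repository root:

```shell
# List files present in the local build context but not tracked by git --
# exactly the files an automated build's checkout will be missing.
checked=yes
if command -v git >/dev/null 2>&1 && git rev-parse --is-inside-work-tree >/dev/null 2>&1; then
  git ls-files | sort > /tmp/tracked.txt
  find . -type f -not -path './.git/*' | sed 's|^\./||' | sort > /tmp/context.txt
  # Lines only in context.txt are local-only files.
  comm -13 /tmp/tracked.txt /tmp/context.txt
fi
```

If a file your Dockerfile COPYs appears in that output, the automated build will fail even though the local build succeeds.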
