GitHub Dockerfile not found at ./Dockerfile (.NET Core)

I know there is some information regarding this around the web, but I am new to this technology, and the information didn't seem to help my issue.
I am trying to trigger a Docker build that is connected to my GitHub repository.
When I checked the solution into master from Visual Studio, it triggered the Docker build, which failed with the error below.
A likely cause is that the Dockerfile is present in GitHub but not in the root folder: the file sits inside the solution folder, which is where Visual Studio added it.
(Screenshots: the Dockerfile is one level down, inside the solution folder.)
And I do have the Dockerfile present in the solution, which I published as an image to Docker Hub.
Anything obvious I have missed?
Thanks

By default, Docker Hub looks for a Dockerfile at the root of your repository.
Since that is not the case here, you should specify the path to your Dockerfile in the Build rules section:
Specify the Dockerfile location as a path relative to the root of the source code repository. (If the Dockerfile is at the repository root, leave this path set to /.)
This screenshot shows an example configuration (there are multiple build rules here, but focus on the Dockerfile location column):
In your example, you should set Dockerfile location to MyFirstContainerApp/Dockerfile.
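As a rough sketch (the branch and tag values here are assumptions of mine, not taken from the question), the build rule would look something like:

    Source type:          Branch
    Source:               master
    Docker Tag:           latest
    Dockerfile location:  MyFirstContainerApp/Dockerfile
    Build Context:        /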

Related

Cloud Build trigger changes file permissions and breaks docker caching

I am using google-cloud-build as CI to test whether a PR breaks the build.
The build basically creates a Docker image.
To reduce the build time, I am trying to use Docker's --cache-from feature, but it fails for me on a COPY ... step, because when using a GitHub App trigger, most file permissions are changed for some reason.
When using a GitHub trigger this issue does not happen, but I cannot trigger that on a PR, as stated here.
Is there a way to prevent Cloud Build from changing file permissions when using a GitHub App trigger? Is there another way to solve this?
For now, we decided that this is fine, and if we want specific permissions on a file we will set them manually in the Dockerfile using RUN chmod ...
After some testing I've found that Cloud Build does not actually change the file permissions: they are preserved from wherever Cloud Build gets the resources. It seems that GitHub (or rather git itself) changes them; git only records whether a file is executable, not the full permission bits, so non-executable files come out of a clone with the default permissions set by your umask.
I had 2 files with the permissions -r--r--r--, and when I pushed them to GitHub, in Cloud Build those files had the permissions -rw-rw-r--. Then, to be sure what was happening, I cloned the repo somewhere else, and the files pulled from the repo also had the permissions -rw-rw-r--. So the cause is on GitHub's side, not Cloud Build's.
As you mentioned in your answer, the best approach is to change permissions at build time.
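A minimal sketch of that approach (the paths and mode here are hypothetical, not from the question):

    FROM alpine:3.19
    COPY config/ /app/config/
    # Re-apply the read-only permissions we need, since git does not
    # preserve them across a clone.
    RUN chmod 444 /app/config/*.conf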

`gcloud builds submit` for Cloud Run

I ran into this situation because the documentation was not clear. The gcloud builds submit --tag gcr.io/[PROJECT-ID]/helloworld command will archive the contents of my source folder and then run the Docker build on the Google build server.
It also only looks at the .gitignore file to decide which contents to archive. If it is a Docker build, it should honor the .dockerignore file.
There is also no word about how to compile the application. If it is not a precompiled application, it has to be compiled before it is dockerized.
The quick guide simply assumes that the application is precompiled and that all the contents of the folder, filtered by .gitignore, are required to run it. People new to the technology will not be aware of all that; I had to figure it out by myself.
So the alternative is to either include the build steps in the Dockerfile (which will make my image heavy), or create a Docker image locally (manually), then submit the image to the repository (manually), and then publish it to Cloud Run (using the second documented command, or manually).
Is there anything I am missing over here?
Cloud Build respects .dockerignore. It will upload all files that are not in .gitignore, but once uploaded, it will respect .dockerignore regarding which files to use for the build.
Compiling your application is usually done at the same time as "containerizing" it. For example, for a Node.js app, the Dockerfile must run npm install --production. I recommend looking at the many examples in the quickstart.
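As an illustration (the entries here are assumptions of mine, not from the answer), a minimal .dockerignore might look like this; these files are still uploaded with the source archive, but are excluded from the Docker build context:

    .git
    node_modules
    *.log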
I think you've got it, essentially your options are:
Building using Cloud Build
Building locally and pushing using Docker
Generally, if you need additional build steps, I would recommend including them in your Dockerfile. Ideally you should be able to go from source + Dockerfile to a complete image in either case.
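A minimal sketch of that, assuming a Node.js app (as in the answer above) with a "build" script in package.json that emits a dist/ folder; all of those names are assumptions. A multi-stage build like this also addresses the "heavy image" concern from the question, since the compile toolchain never reaches the final image:

    # Build stage: full toolchain, compiles the app.
    FROM node:20 AS build
    WORKDIR /app
    COPY package*.json ./
    RUN npm install
    COPY . .
    RUN npm run build              # assumes a "build" script exists

    # Runtime stage: only production dependencies and compiled output.
    FROM node:20-slim
    WORKDIR /app
    COPY package*.json ./
    RUN npm install --production
    COPY --from=build /app/dist ./dist
    CMD ["node", "dist/index.js"]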

VSTS Template: Container (PREVIEW) proper steps?

I'm trying the Container (PREVIEW) template in Visual Studio Team Services. I really have no idea about containers; I'm just curious. So far I was able to run Ubuntu and install Docker, created an account on Docker Hub, and then was able to establish a Docker Registry connection. Now I queue the build and get this error:
    Unhandled: No Docker file matching /home/christianlouislivioco/myagent/_work/1/s/**/Dockerfile was found.
How to resolve this?
Also these questions:
What procedures did I miss? What to do next? Any tips regarding this?
Thanks in advance.
Regarding the Docker (Build an image) task:
If you check the Use Default Build Context option, the task uses the source directory (e.g. [agent working directory]\1\s) as the build context, so the Dockerfile needs to exist in the source directory (it can be in a child folder). Map server files to the build agent in the Get sources section; you can also copy the Dockerfile into the source directory.
If you uncheck Use Default Build Context, you can specify a Build context path that contains the Dockerfile.
Based on your description, you are using the default build context with the Container (Preview) template, so check the Source setting in the Get sources section.
After pushing the image to a registry (e.g. Docker Hub, Azure Container Registry), you can run it with docker run or do other things with it.
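For example (the image name and ports here are placeholders of mine):

    docker pull mydockerid/myfirstcontainerapp:latest
    docker run -d -p 8080:80 mydockerid/myfirstcontainerapp:latest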

Where is the Dockerfile located?

I have pulled an image from Docker Hub. I want to modify the Dockerfile for this image. Where is the Dockerfile located on my machine? Thanks.
The Dockerfile isn't on your machine. Once the image is built, it exists independently of the Dockerfile, just like a compiled C program doesn't need its source code kept around to function. You can partially recover the Dockerfile via the docker history command, but this won't show you files added with ADD or COPY, and various "image squishing" programs can further obfuscate things. The recommended way to get an image's Dockerfile is to go to the repository from which the image was built (hopefully linked from the image's Docker Hub page). If there's no public repository, you're out of luck.
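For instance, to see the recorded build steps of an image you have already pulled (nginx here is just an example image; the output will not include the contents of ADD or COPY layers):

    docker history --no-trunc nginx:latest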
I'm not sure if this helps; I'm a noob at this.
Pulling an image only downloads the image itself, without the supplementary files. If you need those, go to the GitHub page where the image's source lives (if you got it from GitHub), then press the Clone or download button on the right. If you have no access to web pages (on a VM, for instance), install the developer tools and run git clone <url>, where <url> is the link that appears after pressing Clone or download.
If you check the directory you cloned into, you'll find all the files you need.
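A minimal sketch (the repository URL is hypothetical; substitute the one from the Clone or download button):

    git clone https://github.com/example/some-image-repo.git
    cd some-image-repo
    ls    # the Dockerfile and its supporting files are here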

Where to keep Dockerfiles in a project?

I am gaining knowledge about Docker and I have the following questions:
Where are Dockerfiles kept in a project?
Are they kept together with the source?
Are they kept outside of the source? Do you have a separate Git repository just for the Dockerfile?
If the CI server should create a new image for each build and run that on the test server, do you keep the previous image? I mean, do you tag the previous image, or do you remove it before creating the new one?
I am a Java EE developer, so I use Maven, Jenkins, etc., if that matters.
The only restriction on where a Dockerfile is kept is that any files you ADD to your image must be beneath the Dockerfile in the file system. I normally see them at the top level of projects, though I have a repo that combines a bunch of small images, laid out something like this:
    top/
        project1/
            Dockerfile
            project1_files
        project2/
            Dockerfile
            project2_files
The Jenkins Docker plugin can point to an arbitrary directory containing a Dockerfile, so that's easy. As for CI, the most common strategy I've seen is to tag each image built by CI as latest (this is the default if you don't specify a tag), while releases get their own tags. Thus, if you just run an image with no tag you get the last image built by CI, but if you want a particular release it's easy to say so.
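Sketched as commands (the image name and version are placeholders of mine, not from the answer):

    # CI build: an untagged build implicitly becomes :latest.
    docker build -t myorg/myapp .
    docker push myorg/myapp:latest

    # Release: give it its own tag as well.
    docker tag myorg/myapp:latest myorg/myapp:1.4.2
    docker push myorg/myapp:1.4.2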
I'd recommend keeping the Dockerfile with the source, as you would a Makefile.
The build-context issue means most Dockerfiles are kept at or near the top level of the project. You can get around this by using scripts or build tooling to copy Dockerfiles or source folders around, but it gets a bit painful.
I'm not aware of a best practice with regard to tags and CI. Tagging with the git hash or similar might be a good solution. You will want to keep at least one generation of old images in case you need to roll back.
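A minimal sketch of the git-hash approach (the image name is a placeholder):

    # Tag each CI image with the short commit hash so any build can be
    # identified, and rolled back to, later.
    GIT_SHA=$(git rev-parse --short HEAD)
    docker build -t myorg/myapp:"$GIT_SHA" .
    docker push myorg/myapp:"$GIT_SHA"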
