Has anyone been able to create a cake.build file that compiles C# code and then creates a Docker container? I would like to create a Dockerfile once the base code is built, and then run the Docker image in a container.
You can build and run Docker images from your Cake scripts using the Cake.Docker community addin.
Add #addin nuget:?package=Cake.Docker to the top of your build script, and you can then use the DockerBuild alias to build your image. You can also optionally use DockerRun to run your container.
You can find full documentation for this addin on the website, including for DockerBuild (and DockerRun).
For example, assuming your Dockerfile is in a folder called docker:
#addin nuget:?package=Cake.Docker
// the rest of your build script
Task("Docker-Build")
.Does(() => {
var settings = new DockerImageBuildSettings { Tag = new[] {"dockerapp:latest" }};
DockerBuild(settings, "./Docker");
});
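To then run the freshly built image, a follow-up task might look like the sketch below; the Detach setting, and passing null for the command so the image's default is used, are assumptions about typical usage rather than requirements:
Task("Docker-Run")
    .IsDependentOn("Docker-Build")
    .Does(() => {
        // run the image built above; Detach = true corresponds to docker run -d
        var runSettings = new DockerContainerRunSettings { Detach = true };
        DockerRun(runSettings, "dockerapp:latest", null);
    });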
I want to build on top of a Windows Docker container by installing a couple of programs. The files total 0.5 GB, and I want to keep the layers as small as possible. I am hoping I can run the setup files from the build context, and then have the build context swept away at the end so I don't have a needless copy of the source files for the setup.exe embedded in my container layers. However, I have not found an example where this is the case. Instead I mostly see people run a COPY command to a temporary build folder, run their setup, then remove the folder. Won't those files still be in the container layers, because the COPY command creates a new layer when it's done?
I don't know if the container can see the build context directly. I was hoping for some magical folder filled with the build-context files so I could run a script against it, but I haven't found anything.
It seems like the alternative is to create a private file server and perform a RUN that downloads the files from that server, unpacks them, runs the install, and removes them (all as one Docker step). I understand this would make them more available to others who need to rerun the build, but I'm not convinced we'll need to rerun it. It's not likely to change, as the container will build patches for a legacy application. It just seems like a lot to host files on a private, public-facing server for something that will get called once every couple of years, if ever.
So are these my two options?
1. Make a container with needless copies of source files embedded within
2. Host the files on a private file server and download/install/remove them
Or am I missing another option or point about how the containers work?
It's a long shot, as Windows is tricky with its file system, but you could do it this way:
1. In your Dockerfile, use a COPY command, run the install, then RUN del ... to remove the installation files
2. Build your image: docker build -t my-large-image:latest .
3. Run your image: docker run --name my-large-container my-large-image:latest
4. Stop the container
5. Export the container's filesystem: docker export my-large-container > my-large-container.tar
6. Import the filesystem into a new image: cat my-large-container.tar | docker import - my-small-image
The caveat is that you need to run the container once, which might not be what you want. Also, I haven't tested this with a Windows container, sorry.
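Putting those steps together as a shell sequence (note that docker import flattens everything into a single layer and drops image metadata such as ENTRYPOINT and ENV, which can be re-applied with import's -c flag):
docker build -t my-large-image:latest .
docker run --name my-large-container my-large-image:latest
docker stop my-large-container
docker export my-large-container > my-large-container.tar
# import produces a single-layer image without the discarded installer bytes
cat my-large-container.tar | docker import - my-small-image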
I usually do the download or copy in one step, then in the next step I do the silent installation and remove the installer.
# escape=`
FROM mcr.microsoft.com/dotnet/framework/wcf:4.8-windowsservercore-ltsc2016
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
# download the installer in its own step/layer
ADD https://download.visualstudio.microsoft.com/download/pr/6afa582f-fa26-4a73-8cb9-194321e85f8d/ecea51ead62beb7acc73ad9799511ffdb3083ad384fe04ec50e2cbecfb426482/VS_RemoteTools.exe VS_RemoteTools_x64.exe
# install silently and delete the installer within the same RUN, so this layer stays small
RUN Start-Process .\VS_RemoteTools_x64.exe -ArgumentList @('/install','/quiet','/norestart') -NoNewWindow -Wait; `
    Remove-Item -Path C:/VS_RemoteTools_x64.exe -Force;
But otherwise, I don't think you can mount a custom volume while the image is being built.
I didn't find a satisfactory answer to this. Docker seems designed only for the modern era and assumes you'll be able to download what you need via scripts and tools hitting APIs and file servers. The easiest option I found, and the one I eventually went with, was to host the files on a private file server or service (in my case, AWS S3).
I really wish there were a way to have files hosted by the Docker daemon in some way, e.g. if it acted like a temporary server that you could fetch data from over HTTP instead of needing to COPY the files and create a layer. Alas, I found no such feature.
Taking this route made my container about a GB smaller.
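The key to this route is that the download, the install, and the delete all happen inside a single RUN instruction, so no layer ever contains the installer. A sketch for a Windows container using a PowerShell SHELL like the example above (the bucket URL and installer name are hypothetical, and the backtick continuations assume the escape=` directive):
RUN Invoke-WebRequest -Uri https://my-bucket.s3.amazonaws.com/setup.exe -OutFile setup.exe; `
    Start-Process .\setup.exe -ArgumentList '/quiet' -NoNewWindow -Wait; `
    Remove-Item setup.exe -Force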
I'm trying to build a Ruby on Rails project, using Ruby 1.9.3, on a Debian image.
After I've built it using a Dockerfile, it appears that a directory is missing, so the container doesn't start. Can I add it manually? I've tried using "docker run -it sh" to get a shell, but for some reason a directory I add with mkdir vanishes when I exit.
I'm kinda new to this stuff (I've just done some tutorials), so apologies for any mixed-up details.
You are going to need to add the dir and then commit the changes in the container, making a new image that includes the directory. It's much better to use a repeatable Dockerfile to create the image.
Documentation for Dockerfile -> https://docs.docker.com/engine/reference/builder/
Have a look at the documentation for commit here -> https://docs.docker.com/engine/reference/commandline/commit/
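A minimal sketch of the commit route (the image name and directory are placeholders I've assumed):
# start a container from your image with a shell
docker run -it --name tmp my-rails-image sh
# inside the container: create the directory, then exit
mkdir -p /path/to/missing-dir
exit
# back on the host: save the modified container as a new image
docker commit tmp my-rails-image:fixed
docker rm tmp
The repeatable equivalent in a Dockerfile is a single instruction: RUN mkdir -p /path/to/missing-dir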
I have a project which compiles to a binary file, and running that binary exposes some REST APIs.
To compile the project I need Docker image A, which has the compiler and all the libraries required to produce the executable. To run the executable (i.e. host the service) I can get away with a much smaller image B (just a basic Linux distro, no compiler needed).
How does one use Docker in such a situation?
My thinking for this scenario is that you can prepare two base images:
The 1st one, which includes the compiler and all the libs for building your executable; call it base-image:build
The 2nd one, as the base image for building the final image you deliver; call it base-image:runtime
And then break your build process into two steps:
Step 1: build your executable inside base-image:build, and then put the executable somewhere you can fetch it from later, like NFS or an artifact registry;
Step 2: write a Dockerfile that starts FROM base-image:runtime, fetch the artifact/executable generated by Step 1, docker build your delivery image, and then docker push it to your registry for release.
Hope this could be helpful :-)
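For Step 2, a minimal Dockerfile sketch (the artifact URL and binary name are hypothetical):
FROM base-image:runtime
# fetch the executable produced in Step 1
ADD http://artifacts.example.com/builds/myservice /usr/local/bin/myservice
# files fetched over HTTP are not executable by default
RUN chmod +x /usr/local/bin/myservice
ENTRYPOINT ["/usr/local/bin/myservice"]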
mkdir local_dir
docker run -dv $PWD/local_dir:/mnt BUILD_CONTAINER
Compile your code and save it to /mnt in the container. It'll be written to local_dir on your host filesystem and will persist after the container is destroyed.
You should now write a Dockerfile with a step that copies in the new binary, then build from that. But for example's sake...
docker run -dv $PWD/local_dir:/mnt PROD_CONTAINER
Your bin, and everything else in local_dir, will reside in the container at /mnt/
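A sketch of that Dockerfile step (the base image and binary name are assumptions):
FROM debian:stable-slim
# copy in the binary that the build container wrote to local_dir
COPY local_dir/myapp /usr/local/bin/myapp
CMD ["myapp"]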
I want to take screenshots of my app in CI tests using a specific runner (it's running on my PC; it's not a shared runner).
It's a basic Docker runner.
I tried passing builds_dir to the runner config with the full path /home/myuser/mybuilds, but after a successful test the folder is empty. (It seems the setting is relative to the build, not an absolute Linux path.)
The screenshots are probably being taken, but the paths seem to be relative:
app.browserWindow.capturePage().then(function (imageBuffer) {
  // fs.writeFile requires a callback; without one, recent Node versions throw
  fs.writeFile('/home/myuser/mybuilds/page-full-path.png', imageBuffer, function (err) {
    if (err) throw err;
  });
});
So how do I access the build folder after the build, in Docker?
EDIT
I could use artifacts like this:
app.browserWindow.capturePage().then(function (imageBuffer) {
  console.log("This test's path:");
  console.log(path.join(__dirname));
  var mypath = path.join(__dirname, '../..', 'testimages', 'test.png');
  fs.writeFile(mypath, imageBuffer, function (err) {
    if (err) throw err;
  });
});
and then in gitlab-ci.yml
mytest:
  script:
    - mytests
  artifacts:
    paths:
      - testimages
and then download the images from gitlab.com. But is there a way to make this faster, or to view them locally, without going to gitlab.com?
If it's on your server, then you can just add something like:
-v /some/local/path:/data/screenshots:rw
to your docker run command. It will bind-mount the local dir on your server to /data/screenshots in the container; everything you save there will then appear in the local dir.
If you don't want to add a volume, you can copy files out using docker cp:
docker cp <containerId>:/data/screenshots /local/path
You can find your container id with docker ps.
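For example (the container id shown is illustrative):
docker ps    # note the id of the container running your job
docker cp 1a2b3c4d:/data/screenshots /home/myuser/screenshots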
I used to list the tests directory in .dockerignore so that it wouldn't get included in the image, which I used to run a web service.
Now I'm trying to use Docker to run my unit tests, and in this case I want the tests directory included.
I've checked docker build -h and found no related option.
How can I do this?
Docker 19.03 shipped a solution for this.
The Docker client tries to load <dockerfile-name>.dockerignore first and then falls back to .dockerignore if it can't be found. So docker build -f Dockerfile.foo . first tries to load Dockerfile.foo.dockerignore.
Setting the DOCKER_BUILDKIT=1 environment variable is currently required to use this feature. The flag can also be used with docker-compose since 1.25.0-rc3 by additionally specifying COMPOSE_DOCKER_CLI_BUILD=1.
From Mugen's comment, please note:
the custom dockerignore should be in the same directory as the Dockerfile, not in the root context directory like the original .dockerignore
i.e.
when calling
DOCKER_BUILDKIT=1 docker build -f /path/to/custom.Dockerfile ...
your .dockerignore file should be at
/path/to/custom.Dockerfile.dockerignore
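An illustrative layout (paths assumed), with the build context in the current directory and the custom Dockerfile kept in docker/:
docker/custom.Dockerfile
docker/custom.Dockerfile.dockerignore   # applies when building with -f docker/custom.Dockerfile
.dockerignore                           # applies to builds using the default ./Dockerfile
DOCKER_BUILDKIT=1 docker build -f docker/custom.Dockerfile .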
At the moment, there is no way to do this. There is a lengthy discussion about adding an --ignore flag to Docker to provide the ignore file to use.
The options you have at the moment are mostly ugly:
Split your project into subdirectories that each have their own Dockerfile and .dockerignore, which might not work in your case.
Create a script that copies the relevant files into a temporary directory and run the Docker build there (a sketch follows).
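A sketch of that temporary-directory workaround (the paths and image tag are assumptions):
BUILD_DIR=$(mktemp -d)
# copy only what this image needs, including the tests
cp -r src tests Dockerfile "$BUILD_DIR"
docker build -t my-image-with-tests "$BUILD_DIR"
rm -rf "$BUILD_DIR"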
Adding the tests as a volume mount to the container could be an option here: after you build the image, when running it for testing, mount the source code containing the tests on top of the cleaned-up code.
services:
  tests:
    image: my-clean-image
    volumes:
      - '../app:/opt/app' # Add removed tests
I've tried activating DOCKER_BUILDKIT as suggested by @thisismydesign, but I ran into other problems (outside the scope of this question).
As an alternative, I create an intermediary tar using tar's -T flag, which takes a text file listing the files to include in the tar, so it's not so different from a whitelist-style .dockerignore.
I pipe this tar to the docker build command and specify my Dockerfile, which can live anywhere in the file hierarchy. In the end it looks like this:
tar -czh -T files-to-include.txt | docker build -f path/to/Dockerfile -
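Here files-to-include.txt is just a newline-separated list of paths; tar recurses into any directories listed (contents illustrative, matching the -f argument above):
src
tests
path/to/Dockerfile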
Another option is to have a further build process that includes the tests. The way I do it is this:
If the tests are unit tests then I create a new Docker image that is derived from the main project image; I just stick a FROM at the top, and then ADD the tests, plus any required tools (in my case, mocha, chai and so on). This new 'testing' image now contains both the tests and the original source to be tested. It can then simply be run as is or it can be run in 'watch mode' with volumes mapped to your source and test directories on the host.
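A sketch of such a derived testing image (the base image name, paths, and npm-based tooling are assumptions, chosen to match the mocha/chai example):
FROM my-project:latest
# layer the tests and dev-only tooling on top of the production image
COPY test /app/test
RUN npm install --no-save mocha chai
CMD ["npx", "mocha", "/app/test"]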
If the tests are integration tests--for example the primary image might be a GraphQL server--then the image I create is self-contained, i.e., is not derived from the primary image (it still contains the tests and tools, of course). My tests use environment variables to tell them where to find the endpoint that needs testing, and it's easy enough to get Docker Compose to bring up both a container using the primary image, and another container using the integration testing image, and set the environment variables so that the test suite knows what to test.
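A sketch of the Compose wiring described (service names and the environment variable are assumptions):
services:
  api:
    image: my-graphql-server
  integration-tests:
    image: my-integration-tests
    depends_on:
      - api
    environment:
      # tells the test suite where to find the endpoint under test
      - API_URL=http://api:4000/graphql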
Sadly, it isn't currently possible to point to a specific file to use as .dockerignore, so we generate it in our build script based on the target/platform/image. As a Docker enthusiast, I find this a sad and embarrassing workaround.