I'm trying to do something a bit slick, and of course running into issues.
Specifically, I am using a Docker-based environment build system (Lando). I'm feeding it a Dockerfile that it then builds a cluster around, using docker-compose. Locally, this works fine. I also have it working great inside a GitHub Action to spin up a local-dev-identical environment for testing. Quite nice.
However, now I'm trying to expand the Dockerfile using the Dockerfile Plus extension. My Dockerfile looks like this:
# syntax = edrevo/dockerfile-plus
INCLUDE+ ./docker/prod/Dockerfile
COPY docker/dev/php.ini /usr/local/etc/php/conf.d/zzz-docker-custom.ini
# Other stuff here.
This works great for me locally, and I get the contents of docker/prod/Dockerfile included into my docker build.
When I run the same configuration inside a GitHub Actions workflow, I get a syntax error on the INCLUDE+ line, indicating that the extension is not being loaded. The extension relies on BuildKit, which (according to the project page) should be enabled by default on any recent Docker version. Yet whatever Docker is running on GitHub is apparently not using BuildKit. I've tried enabling it by setting the env vars explicitly (as specified on the Dockerfile+ project page), but it still doesn't seem to work.
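For reference, this is roughly what I'm exporting in the workflow environment (the second variable is my guess at what docker-compose needs, since compose drives the build here):
export DOCKER_BUILDKIT=1
export COMPOSE_DOCKER_CLI_BUILD=1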
How can I get Dockerfile+ working in GitHub Actions? Of note, I do not run the docker build command myself (it's run by docker-compose, using files generated on the fly by Lando), so I cannot modify that specific command. But I didn't need to modify it locally anyway, so I don't know what the issue is.
I inherited a clojure code base and I'm trying to containerize it for local development. The creators used deps.edn to manage the dependencies. However, I can't figure out what RUN command I should use to pre-install the dependencies for the project.
Currently, my entrypoint is the following: ['clj', '-m', 'app'], which installs the dependencies every time I start the container.
How do I pre-install dependencies for a clojure project using a Docker RUN command?
Deps/CLI caching is described here. Generally speaking, dependencies are downloaded once and saved in a subdirectory of the project directory named
./.cpcache # "class path cache"
The ./.cpcache directory is analogous to the ~/.m2 cache directory used by Maven and related tools (e.g. Leiningen).
If you run the code locally, you should be able to copy the .cpcache dir with its cached dependencies into your Docker container. Then the dependencies don't need to be re-downloaded on each startup of the Docker container.
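A minimal Dockerfile sketch of that approach (the clojure base image tag and paths are assumptions, not from the question):
FROM clojure:tools-deps
WORKDIR /app
COPY deps.edn ./
# ship the locally populated classpath cache so clj does not recompute it on every start
COPY .cpcache ./.cpcache
COPY src ./src
ENTRYPOINT ["clj", "-m", "app"]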
See also the Deps/CLI overview.
P.S.
This template project is set up to run using both lein and Deps/CLI via the Kaocha tool. You may find the comparison helpful.
P.P.S.
You may find it easiest to run your code by building an uberjar file, which contains all your code and all dependencies in a single artifact. You can do this using either Leiningen or other tools such as depstar. You then invoke the application with a single command like:
java -jar demo-0.1.0-standalone.jar
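If you go the uberjar route, the container only needs a JRE; here is a rough sketch (base image tag and jar path are assumptions):
FROM eclipse-temurin:17-jre
# copy the pre-built standalone jar; no Clojure tooling needed at runtime
COPY target/demo-0.1.0-standalone.jar /app/demo.jar
ENTRYPOINT ["java", "-jar", "/app/demo.jar"]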
Running this should do it:
clj -P
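As a Docker RUN command, that might look something like this sketch (the clojure base image tag and paths are assumptions):
FROM clojure:tools-deps
WORKDIR /app
# copy deps.edn first so Docker caches the dependency layer
COPY deps.edn ./
# -P ("prepare") downloads dependencies without running the program
RUN clj -P
COPY src ./src
ENTRYPOINT ["clj", "-m", "app"]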
I have a BDD framework in Java which I am planning to dockerize. I am able to build and run that image as a whole. But what I want is:
To build 2 images: Image-1 contains the entire project (without the feature files) and Image-2 contains only the feature files.
Reason to do this is: My feature file will change often. I don't want to create my image again every time to install JDK and maven when there is only a change in the feature file.
What I expect is: Image-1 always runs as a container in the background, and when there is a change in the feature files, I build Image-2 and start it as a container. This should trigger the tests using the already-running container, which has all the dependencies.
Reason to do this is: My feature file will change often. I don't want to create my image again every time to install JDK and maven when there is only a change in the feature file.
If you just want to meet the above requirement, what you need is image inheritance, like this:
base/Dockerfile:
FROM ubuntu:16.04
# install JDK/MAVEN here
RUN xxx
Build a base image now:
$ docker build -t mybase:1 .
Then, for your application, use this base image:
app/Dockerfile:
FROM mybase:1
# add new feature files here
ADD ... ...
Every time your feature files change, you can rebuild your app Dockerfile and run a container based on the newly built image. But since the JDK/Maven are in another base image (mybase:1) which was already built, they won't be built again.
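For example, the day-to-day loop after a feature-file change would then just be (image name is a placeholder):
$ docker build -t myapp:1 app/    # fast: only the layers after FROM mybase:1 are rebuilt
$ docker run --rm myapp:1         # runs the tests with the new feature files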
I have a personal ASP.NET Core project which scrapes data from the web using Selenium and Chromium and saves it in a local SQLite database.
I want to be able to run this app as a Docker image on my Synology NAS. I managed to create and run the Docker image (on my Mac), and it displays data from the SQLite db correctly, but I get an error when trying to scrape:
The chromedriver file does not exist in the current directory or in a directory on the PATH environment variable.
From my very limited understanding of Docker in general, I understand that I need to add chromedriver inside the Docker image somehow.
I've searched a lot, went through ~30 different examples and still can't get this to work.
Any help is appreciated!
You need to build a new image based on the existing one, in which you add the chromedriver binary. In other words you need to extend your current image.
So create a directory containing a Dockerfile and the chromedriver binary.
Your Dockerfile should look like this:
FROM your_existing_image_name:version
COPY chromedriver desired_path_inside_container
Then open a terminal inside this directory and execute:
docker build -t your_existing_image_name:version++ .
After that you should be able to start a container from the newly created image.
Some notes:
I have assumed that your existing image has been tagged with a version. If that is not the case, then remove :version from the Dockerfile.
Similarly, remove :version++ from the build command. However, it is good practice to include versioning in your images.
I have not added an entrypoint, assuming that you do not need to change the existing one.
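For example, one common choice for desired_path_inside_container is a directory that is already on PATH in most Linux base images (a sketch only; adjust to your image):
FROM your_existing_image_name:version
# /usr/local/bin is normally on PATH, so Selenium can locate the driver
COPY chromedriver /usr/local/bin/chromedriver
RUN chmod +x /usr/local/bin/chromedriver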
My question is: how should my Erlang app reliably find a binary in the priv directory, not just in production when installed properly, but also during common test?
I realised today, when I added a travis-ci configuration to an old Erlang app and pushed it to GitHub, that the process by which it works locally for me is a little more fragile than I thought. The travis-ci build failed because it, not unreasonably, checked out my repo into a directory named after the repo, which is of the form erlang-APP. Locally, though, my app is in a directory called APP-VSN.
The result of this is that a call to code:lib_dir(APP) returns a correct result during the common test run locally, but if I rename my current directory to erlang-APP instead of APP-VSN (just APP works too), my local build fails exactly like it does for travis-ci, because code:lib_dir(APP) returns {error,bad_name}. The behaviour is as though .. is added to the library path for rebar ct.
Renaming my GitHub repo from erlang-APP to APP resolves the travis-ci build failure... but knowing that the tests only pass depending on the name of the directory the repo is checked out into doesn't sit right with me.
One way could be to use a soft link (either in the repo under version control, or created when initializing the tests), and make your Erlang code path go via the link. E.g., "./APP" -> ".", or "./lib/APP" -> "..".
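A shell sketch of those two variants (run from the project root before starting common test; APP is a placeholder for your application name):
# variant 1: ./APP -> .
ln -s . APP
# variant 2: ./lib/APP -> ..
mkdir -p lib && ln -s .. lib/APP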
I'm building a Yocto image for a project but it's a long process. On my powerful dev machine it takes around 3 hours and can consume up to 100 GB of space.
The thing is that the final image is not "necessarily" the end goal; it's my application that runs on top of it that is important. As such, the yocto recipes don't change much, but my application does.
I would like to run continuous integration (CI) for my app and even continuous delivery (CD). But both are quite hard for now because of the size of the yocto build.
Since the build does not change much, I thought of "caching" it in some way and using it for my application's CI/CD, and I thought of Docker. That would be quite interesting, as I could maintain that image and share it with colleagues who need to work on the project and use it in CI/CD.
Could a custom Docker image be built for that kind of use?
Would it be possible to build such an image completely offline? I don't want to have to upload the 100GB and have to re-download it on build machines...
Thanks!
1. Yes.
I've used docker to build Yocto images for many different reasons, always with positive results.
2. Yes, with some work.
You want to take advantage of the fact that Yocto caches all the stuff you need to do your build in what it calls "Shared State Cache". This is normally located in your build directory under ${BUILDDIR}/sstate-cache, and it contains exactly what you are looking for in this case. There are a couple of options for how to get these files to your build machines.
Option 1 is using sstate mirrors:
This isn't completely offline, but lets you download a much smaller cache and build from that cache, rather than from source.
Here's what's in my local.conf file:
SSTATE_MIRRORS ?= "\
file://.* http://my.shared-computer.com/some-folder/PATH"
Don't forget the PATH at the end. That is required. The build system substitutes the correct path within the directory structure.
Option 2 lets you keep a local copy of your sstate-cache and build from that locally.
In your dockerfile, create the sstate-cache directory (location isn't important here, I like /opt for my purposes):
RUN mkdir -p /opt/yocto/sstate-cache
Then be sure to bind-mount these directories when you run your build in order to preserve the contents, like this:
docker run ... -v /place/to/save/cache:/opt/yocto/sstate-cache
Edit the local.conf in your build directory so that it points at these folders:
SSTATE_DIR ?= "/opt/yocto/sstate-cache"
In this way, you can get your cache onto your build machines in whatever way is best for you (scp, nfs, sneakernet).
Hope this helps!