I inherited a Clojure code base and I'm trying to containerize it for local development. The creators used deps.edn to manage the dependencies. However, I can't figure out what RUN command I should use to pre-install the dependencies for the project.
Currently, my entrypoint is the following: ['clj', '-m', 'app'], which installs the dependencies every time I start the container.
How do I pre-install dependencies for a clojure project using a Docker RUN command?
Deps/CLI caching is described here. Generally speaking, dependencies are downloaded once and saved in a subdirectory of the project directory named
./.cpcache # "class path cache"
The ./.cpcache directory is analogous to the ~/.m2 cache directory used by Maven and related tools (e.g. Leiningen).
If you run the code locally, you should be able to copy the .cpcache dir with its cached dependencies into your Docker container. Then the dependencies don't need to be re-downloaded
for each startup of the Docker container.
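As a rough sketch of that idea (the image tag and main namespace are assumptions, and it presumes you have already run the project locally so ./.cpcache is populated), the project directory can be copied into the image, cache included:
# sketch of the approach described above; assumes ./.cpcache already exists
FROM clojure:tools-deps
WORKDIR /app
# copy the whole project directory, ./.cpcache included
COPY . /app
ENTRYPOINT ["clj", "-m", "app"]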
See also the Deps/CLI overview.
P.S.
This template project is set up to run using both lein and Deps/CLI via the Kaocha tool. You may find the comparison helpful.
P.P.S.
You may find it easiest to run your code by building an uberjar file, which contains all your code and all dependencies in a single artifact. You can do this using either Leiningen or another tool such as depstar. You then invoke the application with a single command like:
java -jar demo-0.1.0-standalone.jar
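For reference, a hedged sketch of what a depstar alias in deps.edn might look like (the coordinates, version, main namespace and jar name are assumptions; check the depstar README for the exact invocation). It would be run with clojure -X:uberjar:
;; deps.edn (sketch: version, main namespace and jar name are assumptions)
{:aliases
 {:uberjar {:replace-deps {com.github.seancorfield/depstar {:mvn/version "2.1.303"}}
            :exec-fn      hf.depstar/uberjar
            :exec-args    {:aot        true
                           :main-class app
                           :jar        "demo-0.1.0-standalone.jar"}}}}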
Running this should do it:
clj -P
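Putting that into a Dockerfile, a minimal sketch (assuming the official clojure image and the main namespace app from the question; adjust names to your project):
# sketch: pre-install deps at build time so startup doesn't re-download them
FROM clojure:tools-deps
WORKDIR /app
# copy only deps.edn first so the dependency layer is cached by Docker
COPY deps.edn /app/
# -P ("prepare") downloads all dependencies without running the program
RUN clj -P
# now copy the rest of the source; code edits won't invalidate the deps layer
COPY . /app
ENTRYPOINT ["clj", "-m", "app"]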
Related
Let's say I have a completely new VPS server which I've just rolled out and haven't installed anything on yet.
And I've compiled and built a production release of a Phoenix application on my local machine, which is identical to the VPS server distribution- and version-wise.
In the directory _build/prod/rel/my_app123, 4 subdirectories have been generated:
bin
erts-12.3
lib
releases
Will copying the contents of rel/my_app123/, that is, these 4 subdirectories, over to the VPS be enough to run the application?
Or will I have to install something extra as well, such as Elixir and Erlang?
What about the production dependencies from mix.exs? Or have these been compiled and included in the release?
P.S. Assume that my web application has no JS, CSS or similar files, and doesn't use a database.
When you run mix release, it bundles into the release directory all of your Elixir/Erlang dependencies for the MIX_ENV in question, the Erlang BEAM runtime/VM that you were using in your build, and any files that you specify in your mix project in mix.exs.
Because the BEAM runtime and code that bootstraps loading your code are included in the release, you won't need to install Elixir or Erlang on the target machine.
Things that are not included:
any non-Elixir dependencies. For example, if you rely on openssl, you'll need to make sure you have a binary-compatible version installed on the machine you plan to run on (typically, the equivalent major version release).
Portable bytecode. The BEAM isn't like the Java VM. Compiled BEAM code needs to run on a substantially similar architecture: build on an Arm64 machine for deployment on an Arm64 virtual machine, or on x86 for Intel-compatible hardware, for instance. And it's probably best to use the same major OS distribution. There may be cases where "any Linux + same CPU architecture" is fine, but building on a Windows or macOS install of Elixir/OTP and deploying on Linux, for example, is a non-starter; you'd need to use a sufficiently similar OS.
As an example, one of my projects has its releases built on Alpine using Docker, so we only really have to worry about CPU compatibility. In our case we do need to make sure some external non-Elixir dependencies our app binds to are included in the Docker image:
RUN apk add --no-cache libstdc++ openssl ncurses-libs wkhtmltopdf xvfb \
fontconfig \
freetype \
ttf-dejavu
(Ignore the fact that wkhtmltopdf is kind of deprecated; we're working on it. But for now it's a non-Elixir dependency we rely on.)
If you're building for, say, an EC2 instance and not using Docker, you'd just need to make sure your release is built on an OS similar to what you're using in production, and that the production AMI (image) has those non-Elixir dependencies on it, or will have them at deployment time, perhaps installed via apt or another package manager. For a VPS, the solution for non-Elixir dependencies will depend on whether you have the option of customizing the base machine image (maybe with Packer or Ansible).
Since you seem to have been a bit confused about it in the comments: yes, MIX_ENV=prod mix release will build all of your production Elixir/Erlang dependencies and include them in the /_build/prod folder.
I include the whole ./prod folder in our release, but it looks like the protocol consolidation binaries and the lib folder's .beam files are all in the rel folder, so that's a bit unnecessary.
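To make that concrete, a hedged sketch of deploying a release built this way (the host, install path and app name are assumptions taken loosely from the question):
# copy the assembled release directory to the VPS and start it
scp -r _build/prod/rel/my_app123 user@your-vps:/opt/my_app123
ssh user@your-vps '/opt/my_app123/bin/my_app123 start'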
If you do a default build, the target will be inside your _build directory, with sub-directories for the config environment and your application, e.g. _build/dev/rel/your_app/. That directory should contain everything you need to run your app -- the prompt after running mix release provides some clues for this when it says something like:
Release created at _build/dev/rel/your_app!
I find it more useful, however, to zip up the app into a single portable file (and yes, I agree that the details about how to do this are not necessarily the first things you see when reading about Elixir releases). The trick is to customize your mix.exs by fleshing out the releases option -- this is usually done via a dedicated private function but the organization of how you supply the options is up to you.
What I often find useful is generating a single zipped .tar.gz file. This can be accomplished by specifying the include_executables_for option along with steps. It looks something like this:
# mix.exs
defmodule YourApp.MixProject do
  use Mix.Project

  def project do
    [
      # ...
      releases: releases()
      # ...
    ]
  end

  defp releases do
    [
      my_app: [
        include_executables_for: [:unix],
        steps: [:assemble, :tar]
      ]
    ]
  end
end
When you configure your application this way, running mix release will generate a nice portable file containing your app with everything it needs. Unpacking this file is educational for understanding everything your app needs. By default this file will be created at a location like _build/dev/yourapp-1.0.0.tar.gz. You can configure the build path by specifying a :path option for your release. See Mix.Release for more options.
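As a hedged usage sketch (file name and install path are assumptions), deploying that tarball on the target machine then looks something like:
# unpack the tarball and start the release on the target
mkdir -p /opt/my_app
tar -xzf my_app-1.0.0.tar.gz -C /opt/my_app
/opt/my_app/bin/my_app start    # bin/my_app also supports daemon, remote, eval, etc.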
I am trying to fix a bug in a Python package for my project that runs inside a Docker container.
The package I am talking about is django-allauth, and I found a small bug with one of the providers.
I know that I need to copy the edited package into the container and install it like this:
COPY folder/with/package /somefolder_inside_container/
RUN pip install -e /somefolder_inside_container/
The root folder of this project on GitHub contains a lot of files and I am not sure which of them I need to copy to make sure the package installs. Do I need to copy anything besides the allauth folder that contains all the models/views/etc.?
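For what it's worth, a hedged sketch under the assumption that the edited checkout sits next to the Dockerfile: pip install -e builds from the packaging metadata (setup.py / setup.cfg or pyproject.toml) at the path it is given, so that metadata has to come along, not just the inner allauth/ package directory.
# copy the whole checkout, including its packaging metadata, then install it editable
COPY django-allauth/ /somefolder_inside_container/
RUN pip install -e /somefolder_inside_container/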
I am trying to deploy a game server on a .NET 5.0 Docker container for Linux deployment. My current Dockerfile looks like this, and it has been working fine so far. My code is packaged into a DLL that is in the DarkRift_Server_Local/Plugins folder. DarkRift.Server.Console.dll is a third-party networking wrapper that loads the plugin and runs my logic:
FROM mcr.microsoft.com/dotnet/runtime:5.0
COPY DarkRift_Server_Local/ App/
EXPOSE 4296/udp
EXPOSE 4296/tcp
WORKDIR /App
ENTRYPOINT ["dotnet", "./Lib/DarkRift.Server.Console.dll"]
Now I want to add a pre-trained ONNX model to my server for AI opponents to use. I have added the Microsoft.ML.OnnxRuntime NuGet package to my project, and my code works when I run it on Windows 10 (using PowerShell to execute DarkRift.Server.Console.dll).
However, I am not able to make this work in the Docker container, because it is unable to locate the onnxruntime.dll file. The specific error is below:
System.TypeInitializationException: The type initializer for
'Microsoft.ML.OnnxRuntime.NativeMethods' threw an exception.
---> System.DllNotFoundException: Unable to load shared library 'onnxruntime' or one of its dependencies.
I've tried copying the onnxruntime.dll file generated during the build to various locations in the DarkRift_Server_Local folder, but it has not been working. I'm rather new to Docker and have only just gotten back to .NET after several years, so I'm having a little trouble figuring out how this is supposed to work. I would appreciate any direction on this.
EDIT: In response to the comment. I'm building the project using VS2019 with Release/Any CPU. The package inclusion code is as follows:
<ItemGroup>
<PackageReference Include="com.playfab.csharpgsdk" Version="0.10.200325" />
<PackageReference Include="Microsoft.ML.OnnxRuntime" Version="1.7.0" />
</ItemGroup>
I have also tried using dotnet publish -c Release and copying the runtimes folder created in the publish output to various locations in my DarkRift_Server_Local folder; I should have mentioned that.
Since you are compiling on Windows but using a Linux container, you have to make sure that you are publishing for the correct runtime.
First, publish with dotnet publish -r linux-x64. You may want to specify -o <path> to make the output easier to find.
You can then copy those files into your image with your Dockerfile.
Alternatively, you could build/publish in your Dockerfile itself to avoid platform issues like this.
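For example, a hedged multi-stage sketch of that alternative; the .csproj name and output layout are assumptions and would need adapting to the actual DarkRift_Server_Local folder structure:
# build stage: publish for linux-x64 so the Linux native onnxruntime
# library from the NuGet package lands in the publish output
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish YourPlugin.csproj -c Release -r linux-x64 -o /out

# runtime stage: the server as in the original Dockerfile, plus the
# freshly published plugin output copied into the Plugins folder
FROM mcr.microsoft.com/dotnet/runtime:5.0
COPY DarkRift_Server_Local/ App/
COPY --from=build /out/ App/Plugins/
WORKDIR /App
EXPOSE 4296/udp
EXPOSE 4296/tcp
ENTRYPOINT ["dotnet", "./Lib/DarkRift.Server.Console.dll"]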
I am trying to migrate a huge project containing Visual Studio and Maven projects to Bazel. I need to access our in-house Maven server, which requires authentication. To get access I need to load the maven_jar Skylark extension, since the default implementation does not support authentication (I get error 401). Using the extension leads to a lot of trouble, like:
ERROR: BUILD:4:1: no such package '@org_bouncycastle_bcpkix_jdk15on//jar': Traceback (most recent call last):
File ".../external/bazel_tools/tools/build_defs/repo/maven_rules.bzl", line 280
_maven_artifact_impl(ctx, "jar", _maven_jar_build_file_te...)
File ".../external/bazel_tools/tools/build_defs/repo/maven_rules.bzl", line 248, in _maven_artifact_impl
fail(("%s: Failed to create dirs in e...))
org_bouncycastle_bcpkix_jdk15on: Failed to create dirs in execution root.
The main issue seems to be the shell that needs to be provided to Bazel in the BAZEL_SH environment variable:
I am working under Windows.
I am using Bazel 0.23.2.
Bazel seems to run bash commands using "bash" directly and not the one provided by the env variable.
I had an Ubuntu shell installed in Windows. Bazel was using everything from Ubuntu, especially when using Maven (settings.xml was taken from the Ubuntu ~/.m2 and not from the Windows user).
After uninstalling Ubuntu and making sure that bash in a cmd ends up in "command not found", I also removed the BAZEL_SH env var; Bazel then throws the message above.
After setting the BAZEL_SH variable again, it fails with the same error message.
I am assuming that Bazel gets a bash from somewhere or is ignoring the env variable. My questions are:
1. How do I set up a correct shell?
2. Is BAZEL_SH needed when using the current version?
For me, the setup docs on the Bazel website are outdated.
Cheers
Please consider using rules_jvm_external to manage your Maven dependencies. It supports both Windows and private repositories using HTTP Basic Authentication.
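As a hedged sketch of what that looks like in a WORKSPACE file (the coordinates, version and in-house repository URL are placeholders; rules_jvm_external itself still has to be fetched via http_archive first, and its README describes the authentication options):
# WORKSPACE (sketch: artifacts and repository URL are placeholders)
load("@rules_jvm_external//:defs.bzl", "maven_install")

maven_install(
    artifacts = [
        "org.bouncycastle:bcpkix-jdk15on:1.64",
    ],
    repositories = [
        "https://maven.example.internal/repository/maven-public/",
    ],
)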
For me, the setup docs on the Bazel website are outdated.
The Bazel team is aware of this and will be updating our docs shortly.
I have cloned and built the waf script using:
./waf-light configure
Then, to build my project (provided by Gomspace), I need to add waf and eclipse.py to my path. So far I haven't found anything better than this setenv script:
WAFROOT=~/git/waf/
export PYTHONPATH=$WAFROOT/waflib/extras/:$PYTHONPATH
export PATH=~/git/waf/:$PATH
Called with:
source setenv
This is a somewhat ugly solution. Is there a more elegant way to install waf?
You don't install waf. The command you found correctly builds waf:
./waf-light configure build
Then, for each project you create, you put the built waf script into that project's root directory. I can't find a reference, but this is the way in which waf's primary author, Thomas Nagy, wants the tool to be used. Projects that repackage waf to make the tool installable aren't "officially sanctioned".
There are advantages and disadvantages with non-installation:
Disadvantages:
You have to add the semi-binary, roughly 100 kB waf file to your repository.
Because the file contains binary code, people can have legal objections to distributing it.
Advantages:
It doesn't matter if new versions of waf break the old API.
Users don't need to install waf before compiling the project -- having Python on the system is enough.
Fedora (at least Fedora 22) has a yum package for waf, which shows that it's possible to do a system install of waf, albeit with a hack.
After you run something like python3 ./waf-light configure build, you'll get a file called waf that's actually a Python script with some binary data at the end. If you put it into /usr/bin and run it as non-root, you'll get an error because it fails to create a directory in /usr/bin. If you run it as root, you'll get the new directory and /usr/bin/waf runs normally.
Here's the trick that I learned from examining the find_lib() function in the waf Python script.
Copy the waf to /usr/bin/waf
As root, run /usr/bin/waf. Notice that it creates a directory. You'll see something like /usr/bin/.waf-2.0.19-b2f63c807a4215294bf6005410c74c18
mv that directory to /usr/lib, dropping the . in the directory name, e.g. mv /usr/bin/.waf-2.0.19-b2f63c807a4215294bf6005410c74c18 /usr/lib/waf-2.0.19-b2f63c807a4215294bf6005410c74c18
If you want to use waf with Python 3, repeat steps 2-3 running the Python script /usr/bin/waf under Python 3. Under Python 3, the directory names will start with .waf3-/waf3- instead of .waf-/waf-.
(Optional) Remove the binary data at the end of /usr/bin/waf.
Now, non-root should be able to just use /usr/bin/waf.
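Put together as shell commands (hedged: the hash-suffixed directory name will differ for your waf version and build, and the first run assumes waf is in the current directory):
# as root: install the script, let it unpack itself, then relocate the payload
sudo cp waf /usr/bin/waf
sudo /usr/bin/waf --version    # first run creates /usr/bin/.waf-<version>-<hash>
sudo mv /usr/bin/.waf-2.0.19-b2f63c807a4215294bf6005410c74c18 \
        /usr/lib/waf-2.0.19-b2f63c807a4215294bf6005410c74c18
# now a non-root user can run it
waf --version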
That said, here's something to consider, in line with what another answer said: I believe waf's author intended waf to be embedded in projects so that each project can use its own version of waf without fear that a project will fail to build when there are newer versions of waf. Thus, the one-global-version use case seems not to be officially supported.