How to ask sbt to only fetch dependencies, without compiling?

Is there a way to only download the dependencies, without compiling the source?
I am asking because I am trying to build a Docker build environment for my bigger project.
The idea is that during docker build I clone the project, download all dependencies, and then delete the code.
Then I use docker run -v to mount the frequently changing code into the container and start compiling the project.
Currently I just compile the code during build and then compile it again on run. The problem is that when a dependency changes, I have to build from scratch, and that takes a long time.

Run sbt's update command. Dependencies will be resolved and retrieved.
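For the Docker workflow described in the question, a minimal Dockerfile sketch might look like this (the base image and paths are assumptions, not part of the question; use any image that provides sbt):

# Sketch: resolve dependencies at image build time, compile at run time.
FROM sbtscala/scala-sbt:latest        # assumed base image; any image with sbt works
WORKDIR /build
# Copy only the build definition so this layer stays cached until it changes.
COPY build.sbt .
COPY project/ project/
# Resolve and fetch all dependencies without compiling any source.
RUN sbt update
# Mount the frequently changing source at run time, e.g.:
#   docker run -v "$PWD/src":/build/src <image>
CMD ["sbt", "compile"]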

Related

How to generate runtimeconfig.json for dependency with dotnet publish

I'm building a .NET Core 2.1 library. It has a runnable dependency, which I need to start first.
However, dotnet publish generates a .runtimeconfig.json only for my library, not for my dependency.
Is there a way to force dotnet to generate .runtimeconfig.json for a dependency?
I have tried:
<PropertyGroup>
  <GenerateRuntimeConfigurationFiles>true</GenerateRuntimeConfigurationFiles>
  <GenerateDependencyFile>true</GenerateDependencyFile>
</PropertyGroup>
but it did not help.
As a workaround, I can copy my app's .runtimeconfig.json to the dependency's .runtimeconfig.json, but it seems hacky. Is there a better option?
I have prepared the test case, if anyone wants to take a crack at it:
git clone https://github.com/eduard93/gerenate-runtimeconfig.json.git
cd gerenate-runtimeconfig.json
docker build --tag mbs .
docker run --name mbs mbs
docker rm mbs --force
If you uncomment line 24 of the Dockerfile (the copy workaround), the image runs successfully.
Without workaround I get this error:
A fatal error was encountered. The library 'libhostpolicy.so' required to execute the application was not found in '/app/'.
Failed to run as a self-contained app. If this should be a framework-dependent app, add the /app/IRISGatewayCore21.runtimeconfig.json file specifying the appropriate framework.
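For reference, a sketch of the copy workaround mentioned in the question (MyLibrary.runtimeconfig.json is an assumed name for the app's own config; the IRISGatewayCore21 name comes from the error message above):

# Hypothetical Dockerfile step: reuse the app's runtime config for the dependency.
RUN cp /app/MyLibrary.runtimeconfig.json /app/IRISGatewayCore21.runtimeconfig.json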

How to require() a dependency that doesn't exist yet with Webpack?

Thanks for your attention; I've been facing this issue for a couple of days.
I've got a React/Express project from Create React App inside a Docker container.
Basically, I've got some JSON files that I need to fetch at docker run time, because I don't want to maintain different docker images; I'd like to have a single one.
When I retrieve the files and then run npm run build at docker run time, my code works fine.
But the problem is that I want to run npm run build at docker build time, and then my code cannot find the JSON files, because they were not bundled by webpack.
This is how I currently load the files:
const artifact = require(`./${path.join(config.files.dir, `${file}.json`)}`);
How can I load these files after the webpack build?
Currently, I don't have any webpack config files.
Thanks in advance.

Coverity scan while building in Docker container

I have a custom Docker container in which I perform the build and tests of a project. It is somehow integrated with Travis CI. Now I want to run the Coverity scan analysis from within Travis CI as well, but the tricky part is (if I understand the Coverity docs correctly) that I need to run the build through Coverity's tooling. The build, however, runs in the container.
Now, according to cov-build --help:

The cov-build or cov-build-sbox command intercepts all calls to the compiler invoked by the build system and captures source code from the file system.
What I've tried:
cov-build --dir=./cov docker exec -ti container_name sh -c "<custom build commands>"
With this approach, however, Coverity apparently does not catch the calls to the compiler (which is quite understandable, given Docker's isolation) and emits no files.
What I do not want (at least while there is hope for a better solution):

- to install locally all the stuff necessary to build in the container, only to be able to run the Coverity scan;
- to run cov-build from within the container, since:
  - I believe this would increase the docker image size significantly;
  - I use the Travis CI addon for the Coverity scan, and this would complicate things a lot.
The Travis CI part is just FWIW; I tried all of that locally and it doesn't work either.
I would be thrilled by any suggestions for this problem. Thank you.
Okay, I sort of solved the issue.

1. I downloaded and modified (just a few modifications, to fit my environment) the script that Travis uses to download and run the Coverity scan.
2. I installed Coverity on the host machine (in my case, the Travis CI machine).
3. I ran the docker container and mounted the directory where Coverity is installed using docker run -dit -v <coverity-dir>:<container-dir>:ro .... This way I avoided increasing the docker image size.
4. I executed the cov-build command and uploaded the analysis using another part of the script, directly from the docker container.
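A rough shell sketch of steps 3 and 4 (the image name, paths, and build command are placeholders):

# Step 3: mount the host's Coverity installation read-only into the container.
docker run -dit --name build -v /opt/coverity:/coverity:ro my-build-image
# Step 4: run the capture inside the container; cov-build wraps the real build.
docker exec build /coverity/bin/cov-build --dir /tmp/cov-int <custom build commands>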
Hope this helps someone struggling with a similar issue.
If you're amenable to adjusting your build, you can change your "compiler" to be cov-translate <args> --run-compile <original compiler command line>. This is effectively what cov-build does under the hood (minus the run-compile since your compiler is already running), and should result in a build capture.
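For instance, a hypothetical wrapper script installed in place of the compiler inside the container (the paths and the --dir argument are assumptions, not verified against a particular Coverity version):

#!/bin/sh
# Hypothetical wrapper: route every compile through cov-translate so Coverity
# captures each translation unit while the normal build system drives the build.
exec /coverity/bin/cov-translate --dir /tmp/cov-int --run-compile /usr/bin/gcc "$@"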
Here is the solution I use.
In the "script", "after_script", or another phase of the Travis job's lifecycle you want:

1. Download the Coverity tool archive using wget (the complete command to use can be found in your Coverity Scan account).
2. Untar the archive into a coverity_tool directory.
3. Start your docker container as usual; there is no need to mount the coverity_tool directory as a volume if you created it inside the directory from which the container is started.
4. Build the project using the cov-build tool inside docker.
5. Archive the generated cov-int directory.
6. Send the result to Coverity using a curl command.
Step 6 should be feasible inside the container but I usually do it outside.
Also, don't forget that COVERITY_SCAN_TOKEN needs to be encrypted and exported as an environment variable.
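A condensed sketch of steps 1 to 6 (the project name, email, and image are placeholders; the exact download command is shown in your Coverity Scan account):

# Steps 1-2: download and unpack the Coverity build tool.
wget https://scan.coverity.com/download/linux64 --post-data "token=$COVERITY_SCAN_TOKEN&project=<project>" -O coverity_tool.tgz
mkdir coverity_tool && tar xzf coverity_tool.tgz --strip-components=1 -C coverity_tool
# Steps 3-4: run cov-build inside the container; coverity_tool sits in the mounted build directory.
docker run --rm -v "$PWD":/src -w /src <image> ./coverity_tool/bin/cov-build --dir cov-int <custom build commands>
# Steps 5-6: archive the result and upload it.
tar czf cov-int.tgz cov-int
curl --form token=$COVERITY_SCAN_TOKEN --form email=<email> --form file=@cov-int.tgz "https://scan.coverity.com/builds?project=<project>"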
A concrete example is often more understandable than a long text; here is a commit that applies the above steps to build a project and send the results to Coverity Scan:
https://github.com/BoubacarDiene/NetworkService/commit/960d4633d7ec786d471fc62efb85afb5af2bed7c

Installing Git Release in Docker

If I want to install code from a release version on GitHub in Docker, how can I do that while taking up the least space possible in the image? Currently, I've done something like:
RUN wget https://github.com/some/repo/archive/v1.5.1.tar.gz
RUN tar -xvzf v1.5.1.tar.gz
WORKDIR /unzipped-1.5.1/
RUN make; make install
The issue here is that the final image will contain the downloaded tar, the unzipped version, and everything that gets created during make. I don't need the vast majority of this. How do I install my library in my image without keeping all of this extra data?
This is the textbook definition of the problem that Docker multi-stage builds aim to solve.
The idea is to do the build in a separate stage, with all of the build dependencies, and then copy only the final product into the image you ship.
Note that this is available only in newer versions of Docker (17.05 onwards).
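A minimal sketch of that idea applied to the snippet above (the base image and the /usr/local install prefix are assumptions):

# Stage 1: download, unpack, and build; nothing here ends up in the final image.
FROM debian:bullseye AS builder
RUN apt-get update && apt-get install -y wget build-essential
RUN wget https://github.com/some/repo/archive/v1.5.1.tar.gz && tar -xzf v1.5.1.tar.gz
WORKDIR /unzipped-1.5.1/
RUN make; make install        # assumes make install copies files under /usr/local

# Stage 2: start from a clean base and copy over only the installed files.
FROM debian:bullseye
COPY --from=builder /usr/local /usr/local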

NPM: Only install missing - how to speed up npm install

I have a lot of devDependencies in my npm script. npm install takes a few minutes the first time; that's OK.
But since I'm integrating with a TFS build server, it only needs to run npm install once. After that, npm install is just wasting time, because it takes 2-3 minutes just to determine that the packages are already installed. Also, it seems to always reinstall the packages installed with the -g global flag, even when they already exist.
How can I make it check whether packages exist and, if so, skip npm install?
You can use npm-cache as an alternative if you use on-premise build agents for your builds.
It is useful for build processes that run [npm|bower|composer|jspm] install every time as part of their build process. Since dependencies don't change often, this often means slower build times. npm-cache helps alleviate this problem by caching previously installed dependencies on the build machine. npm-cache can be a drop-in replacement for any build script that runs [npm|bower|composer|jspm] install.

How it Works

When you run npm-cache install [npm|bower|jspm|composer], it first looks for package.json, bower.json, or composer.json in the current working directory, depending on which dependency manager is requested. It then calculates the MD5 hash of the configuration file and looks for a file named <hash>.tar.gz in the cache directory ($HOME/.package_cache by default). If the file does not exist, npm-cache uses the system's installed dependency manager to install the dependencies. Once the dependencies are installed, npm-cache tars the newly downloaded dependencies and stores them in the cache directory. The next time npm-cache runs and sees the same config file, it will find the tarball in the cache directory and untar the dependencies in the current working directory.
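For instance, a minimal usage sketch on a persistent on-premise agent:

npm install -g npm-cache     # one-time install on the build agent
npm-cache install npm        # restores cached modules, or installs and caches them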
You can also try npm-install-missing.
However, if you are using the VSTS Hosted Build Agent, you cannot do this: every time you queue a build with the Hosted Build Agent, a clean agent is assigned to the build. That means no dependency packages are installed on the agent, so you need to perform a complete npm install.
