Intended use of appProject in the VS Code docker-build task

Some context: I'm learning to use Docker. I have a hand-written Dockerfile and a docker-compose.yml file, but when I stand up the container and the netcoreapp3.1 Kestrel web app inside it, there's a problem loading the SSL certificate: the cert file isn't visible to Kestrel, for reasons as yet unknown to me.
In the process of fathoming this, I discovered VS Code has support for docker-build tasks. I've excerpted the interesting portion of the docs below.
The part that confuses me is the mandatory appProject property. My Dockerfile pulls source for several related projects from git, then compiles it and does various other things. Why do I need to specify an app project in the task when that sort of thing is defined by the Dockerfile? I strongly suspect I have failed to understand how these tasks are intended to be used, something the docs don't really address.
Could someone please explain the intended mode of use and the relevance of these settings? Preferably with examples.
Docker build task
The docker-build task builds Docker images using the Docker command line (CLI). The task can be used by itself, or as part of a chain of tasks to run and/or debug an application within a Docker container.
The most important configuration settings for the docker-build task are dockerBuild and platform:
The dockerBuild object specifies parameters for the Docker build command. Values specified by this object are applied directly to Docker build CLI invocation.
The platform property is a hint that changes how the docker-build task determines Docker build defaults.
Platform support#
While the docker-build task in tasks.json can be used to build any Docker image, the extension has explicit support (and simplified configuration) for Node.js, Python, and .NET Core.
.NET Core (docker-build)#
Minimal configuration using defaults
When you build a .NET Core-based Docker image, you can omit the platform property and just set the netCore object (platform is implicitly set to netcore when the netCore object is present). Note that appProject is a required property:
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "Build .NET Core Image",
      "type": "docker-build",
      "netCore": {
        "appProject": "${workspaceFolder}/project.csproj"
      }
    }
  ]
}
Platform defaults
For .NET Core-based images, the docker-build task infers the following options:
dockerBuild.context: the root workspace folder.
dockerBuild.dockerfile: the file Dockerfile in the root workspace folder.
dockerBuild.tag: the base name of the root workspace folder.
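If your layout differs from these defaults, the same dockerBuild properties can be set explicitly in the task. A minimal sketch, with illustrative project path, Dockerfile location, and tag:
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "Build .NET Core Image",
      "type": "docker-build",
      "netCore": {
        "appProject": "${workspaceFolder}/src/MyApp/MyApp.csproj"
      },
      "dockerBuild": {
        "context": "${workspaceFolder}",
        "dockerfile": "${workspaceFolder}/docker/Dockerfile",
        "tag": "myapp:dev"
      }
    }
  ]
}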

Related

How to set dependency-mapping binding in gradle bootBuildImage (Spring-boot 2.7.1, native)

I am using Spring Boot 2.7.1 with the native image configuration, following the guide at the link below.
Spring native official doc
My problem is that when running bootBuildImage, the buildpack ["gcr.io/paketo-buildpacks/java-native-image:7.19.0"] tries to download the external dependency paketo-buildpacks/bellsoft-liberica from https://download.bell-sw.com/vm/22.3.0/bellsoft-liberica-vm-core-openjdk17.0.5+8-22.3.0+2-linux-amd64.tar.gz, which is not allowed by the company firewall.
I then found that you can configure dependency-mapping bindings for these dependencies within the required buildpack, at least when using the pack CLI, per this guide.
But when using the pack CLI alone, the Gradle bootBuildImage task becomes somewhat redundant, and I would then have to use some external tool to produce the native Docker container and image. I would like to use only bootBuildImage to map these dependency bindings.
I found this binding function in the Gradle bootBuildImage docs, but I am not sure what string it expects, or whether the path should be similar to the pack CLI config; I can't find any relevant info.
Here is the bootBuildImage config:
bootBuildImage {
    builder = 'docker.io/paketobuildpacks/builder:tiny'
    runImage = 'docker.io/paketobuildpacks/run:tiny-cnb'
    buildpacks = ['gcr.io/paketo-buildpacks/java-native-image']
    binding("bindnings/bellsoft-jre-config:/platform/bindings/bellsoft-jre-config")
    environment = [
        "BP_NATIVE_IMAGE" : "true",
    ]
}
The dependency-mapping binding consists of two files:
The type file contains the string dependency-mapping; I created it with:
echo "dependency-mapping" >> type
The file named after the sha256 digest of the Bellsoft Liberica artifact, 3dea0f7a9312c738d22b5e399b6ce9abe13b45b2bc2c04346beb941a94e8a932, contains the replacement URL; I created it with:
echo "https://download.bell-sw.com/vm/22.3.0/bellsoft-liberica-vm-core-openjdk17.0.5+8-22.3.0+2-linux-amd64.tar.gz" >> 3dea0f7a9312c738d22b5e399b6ce9abe13b45b2bc2c04346beb941a94e8a932
And yes, I'm aware that this is the exact same URL; it is only there to test that the binding config is set up correctly. If it is, the build should instead fail with an untrusted-certificate error when downloading.
Currently the build fails with:
Caused by: org.springframework.boot.buildpack.platform.docker.transport.DockerEngineException: Docker API call to 'localhost/v1.24/containers/create' failed with status code 400 "Bad Request"
    at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.execute(HttpClientTransport.java:156)
    at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.execute(HttpClientTransport.java:136)
    at org.springframework.boot.buildpack.platform.docker.transport.HttpClientTransport.post(HttpClientTransport.java:108)
    at org.springframework.boot.buildpack.platform.docker.DockerApi$ContainerApi.createContainer(DockerApi.java:340)
    at org.springframework.boot.buildpack.platform.docker.DockerApi$ContainerApi.create(DockerApi.java:331)
    at org.springframework.boot.buildpack.platform.build.Lifecycle.createContainer(Lifecycle.java:237)
    at org.springframework.boot.buildpack.platform.build.Lifecycle.run(Lifecycle.java:217)
    at org.springframework.boot.buildpack.platform.build.Lifecycle.execute(Lifecycle.java:151)
    at org.springframework.boot.buildpack.platform.build.Builder.executeLifecycle(Builder.java:157)
    at org.springframework.boot.buildpack.platform.build.Builder.build(Builder.java:115)
    at org.springframework.boot.gradle.tasks.bundling.BootBuildImage.buildImage(BootBuildImage.java:521)
I assume this is caused by an invalid binding config, but I can't find what it should be.
Paketo configuration (binding)
Dependency mapping bindings can be tricky. There are a number of things that have to be just right, or the buildpacks won't pick up the binding and won't map dependencies.
While there is talk of changing this in buildpacks to make swapping out dependencies easier, the short-term solution is to use binding-tool.
You can run bt dm -b paketo-buildpacks/bellsoft-liberica and it will go download the dependencies from the specified buildpack and generate the binding files for you.
By default it downloads dependencies and writes the bindings to $PWD/bindings, but you can change that. For example, I like to put my dependencies in my home directory so I can share them across apps: SERVICE_BINDING_ROOT=~/.bt/bindings bt dm ..., or export SERVICE_BINDING_ROOT=~/.bt/bindings (or however you set an environment variable in your shell).
Once you have the bindings created, you just need to point your app to them. How you set the property differs between Maven & Gradle, but the value of the property is the same. It should be <local-path>:<container-path>.
The local path should be the full or relative path to where you created the bindings with bt dm. The container path should almost always be /platform/bindings. This maps your full set of bindings locally to the full set of bindings that the buildpacks will consume. In other words, put all of your bindings into the same directory locally, map that to /platform/bindings and the buildpacks will see everything.
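As a sketch, the resulting local layout might look like this (the directory name is taken from the question; the second file is named after the sha256 digest of the artifact being replaced):
bindings/
└── bellsoft-jre-config/
    ├── type          (contains: dependency-mapping)
    └── 3dea0f7a9312c738d22b5e399b6ce9abe13b45b2bc2c04346beb941a94e8a932          (contains the replacement URL)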
For example with Gradle: binding("bindings/:/platform/bindings").
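Putting that together with the build file from the question, a minimal sketch could look like this (paths are illustrative; the local bindings directory is the one populated above):
bootBuildImage {
    builder = 'docker.io/paketobuildpacks/builder:tiny'
    runImage = 'docker.io/paketobuildpacks/run:tiny-cnb'
    buildpacks = ['gcr.io/paketo-buildpacks/java-native-image']
    // Map the entire local bindings directory into the container
    binding('bindings/:/platform/bindings')
    environment = ['BP_NATIVE_IMAGE': 'true']
}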
You can adjust the container path by setting SERVICE_BINDING_ROOT in the container as well, but it doesn't offer a lot of advantage.
You can also set multiple entries for bindings, so long as the paths are unique. So you could set binding("/home/user/.bt/bindings/foo:/platform/bindings/foo") and also binding("bindings/bar:/platform/bindings/bar"). That would let you take bindings from two different locations locally and map them into the /platform/bindings directory so both would be visible to buildpacks. This gives you more fine-grained control but as you can see becomes pretty verbose.
Details on configuring Maven and configuring Gradle for buildpacks can be found at those links.

Install dependencies in container using deps.edn

I inherited a clojure code base and I'm trying to containerize it for local development. The creators used deps.edn to manage the dependencies. However, I can't figure out what RUN command I should use to pre-install the dependencies for the project.
Currently, my entrypoint is ["clj", "-m", "app"], which installs the dependencies every time I start the container.
How do I pre-install dependencies for a clojure project using a Docker RUN command?
Deps/CLI caching is described here. Generally speaking, dependencies are downloaded once and cached: downloaded libraries land in ~/.m2/repository (git deps in ~/.gitlibs), while the computed classpath is cached in a subdirectory of the project directory named
./.cpcache # "classpath cache"
The ~/.m2 directory is the same cache used by Maven and related tools (e.g. Leiningen). If you run the code locally first, you should be able to copy these cache directories into your Docker container, so the dependencies don't need to be re-downloaded
for each startup of the Docker container.
See also the Deps/CLI overview.
P.S.
This template project is set up to run using both lein and Deps/CLI via the Kaocha tool. You may find the comparison helpful.
P.P.S.
You may find it easiest to run your code by building an uberjar file, which contains all your code and all dependencies in a single artifact. You can do this using either Leiningen or other tools such as depstar. You then invoke the application with a single command like:
java -jar demo-0.1.0-standalone.jar
Running this should do it (the -P flag means "prepare": it downloads dependencies without running the program):
clj -P
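To bake this into the image, a minimal Dockerfile sketch could look like the following (the base image tag is an assumption; adjust it to your setup):
FROM clojure:temurin-17-tools-deps
WORKDIR /app
# Copy deps.edn first so the downloaded dependencies are cached in their own layer
COPY deps.edn .
# -P ("prepare") downloads and caches dependencies without running the program
RUN clj -P
# Now copy the rest of the source; edits here won't invalidate the dependency layer
COPY . .
ENTRYPOINT ["clj", "-m", "app"]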

What is the .dockerfile extension?

Visual Studio Code (1.22.2) offers a file extension named .dockerfile in the save dialog. What is a file with this extension? In all the documentation and examples I've seen so far, a Dockerfile is only ever called Dockerfile.
If I enter Dockerfile as a file name, a file named Dockerfile.dockerfile is created.
It appears that "*.dockerfile" is simply an alternative to the conventional "Dockerfile" name. This is perhaps useful if you want to keep a collection of dockerfiles in the same directory. Note the -f/--file option in docker help build:
-f, --file string Name of the Dockerfile (Default is 'PATH/Dockerfile')
In other words, you are not required to use the name "Dockerfile", and the VSCode extension will correctly syntax-highlight any file ending in ".dockerfile".
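For example, to build from a file named app.dockerfile (the name is illustrative):
docker build -f app.dockerfile -t myapp .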
Dockerfile
A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build, users can create an automated build that executes several command-line instructions in succession. Docker images are the basis of containers. An image is an ordered collection of root filesystem changes and the corresponding execution parameters for use within a container runtime. An image typically contains a union of layered filesystems stacked on top of each other. An image does not have state and it never changes.
Dockerfile extension
A Dockerfile has no extension. If you're using Docker on Windows, use Notepad++ to create the Dockerfile; when saving, select "All types" and save the file with the name "Dockerfile", for example:
Mongodb/Dockerfile
Using the .dockerfile extension tells VS Code that the file is a Dockerfile, enabling code highlighting and linting.
What worked for me was to save the file in VS Code as a Dockerfile, but you need to remove the .dockerfile extension that VS Code puts on it before running the docker-compose up command.
Even though VSCode can deal with extensionless files just fine, some major parts of the Windows operating system can't. Try double clicking a Dockerfile (without extension) in the Windows Explorer. You will always be asked which program you want to open it in because Windows can't map extensionless files to a default program.
My guess is that because of this problem, Microsoft would like for all files to have an extension and uses VSCode to nudge people towards using a file extension for Dockerfiles, ignoring the fact that this contradicts the de facto standard.
A Dockerfile doesn't have any extension. As you can see from the documentation (https://docs.docker.com/compose/gettingstarted/), it has none.

Different docker compose override for custom Visual Studio configuration

We have a fairly complex system using docker-compose with a lot of different microservices. I want to be able to run an individual microservice via Visual Studio with one docker-compose configuration (Debug). Alternatively, I have another configuration (let's call it Debug2) where I want a slightly different docker-compose configuration.
Right now my "docker-compose.yml" file has the basics, and my "docker-compose.override.yml" has some development specific things. I made a "docker-compose.debug.yml". When I run the project in Debug mode, it launches all 3 of those files.
All is well so far, right?
Well, then I tried making a "docker-compose.debug2.yml". I added a new configuration to the project and solution called "Debug2". When I try running from Visual Studio in that mode, it only launches with the first 2 files, and doesn't attempt to use the "debug2" file at all.
Is the system hardcoded to only allow Debug and Release override files? Did I do something wrong or is there an oversight? Any other ideas?
When you are running the services via compose, are you passing the optional override file as well?
For example,
docker-compose -f docker-compose.yml -f docker-compose.debug.yml -f docker-compose.debug2.yml up
By default, Compose only looks for a docker-compose.override.yml to my knowledge. Therefore, you would have to pass any other override file as an explicit argument when you spin up your environment.
"By default, Compose reads two files, a docker-compose.yml and an optional docker-compose.override.yml file. By convention, the docker-compose.yml contains your base configuration. The override file, as its name implies, can contain configuration overrides for existing services or entirely new services."
For more information: https://docs.docker.com/compose/extends/
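As an illustration, an override file such as docker-compose.debug2.yml might layer debug-specific settings on top of the base definition (the service name and values here are assumptions):
version: "3.4"
services:
  mymicroservice:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "5001:80"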
For anyone else coming across this issue you can find documentation here:
https://learn.microsoft.com/en-us/visualstudio/containers/docker-compose-properties?view=vs-2019
The two specific file names for "debug" and "release" are:
docker-compose.vs.debug.yml
docker-compose.vs.release.yml

iOS Google Tag Manager Integration: How to add multiple containers per App environment?

I completed the integration of the latest Google Tag Manager (v5) for iOS together with Firebase (https://developers.google.com/tag-manager/ios/v5/).
The big change here is that the default container file is not binary anymore, it is plain JSON.
The integration requires that you have a folder (not a group!) named "container" inside your app workspace, with the container file located in this folder. This raises my issue: we have two different GTM containers, one for the testing/development app and one for production.
Because a folder reference is used, I cannot add a second container file and control which one is included via target membership.
I cannot create an additional folder either, since GTM requires the folder at root level with the exact name "container".
Does anybody have an idea how this can be solved?
Thanks,
Fahim
You should be able to configure an Xcode "run script" build phase that clears the container directory and copies the correct container into place.
Sample Run Script (if somebody has the same issue):
rm -vf "${SRCROOT}/root_folder/container/"*
cp "${SRCROOT}/root_folder/target/test/GTM-XXXXX.json" "${SRCROOT}/root_folder/container/"
It is important that this copy step runs first in Build Phases; otherwise some of GTM's precompilation steps will not recognize the container.
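A variant that picks the container per build configuration might look like this sketch (the folder names and container IDs are placeholders carried over from the example above):
rm -vf "${SRCROOT}/root_folder/container/"*
if [ "${CONFIGURATION}" = "Release" ]; then
    cp "${SRCROOT}/root_folder/target/production/GTM-YYYYY.json" "${SRCROOT}/root_folder/container/"
else
    cp "${SRCROOT}/root_folder/target/test/GTM-XXXXX.json" "${SRCROOT}/root_folder/container/"
fi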
