Is there a reference for mapping IotEdge module createOptions in deployment manifest to Docker container create options - azure-iot-edge

The title pretty much sums it up.
I'm looking for a reference to the mapping used in the following createOptions portion of the IoTEdge Deployment Manifest:
"modules": {
"MyCoolModule": {
"settings": {
"image": "mycoolimage.registry.example.com:latest",
"createOptions": "{\"HostConfig\":{\"Binds\":[\"/tmp/.X11-unix:/tmp/.X11-unix\"],\"LogConfig\":{\"Type\":\"json-file\",\"Config\":{\"max-size\": \"100m\",\"max-file\":\"2000\"}}}}"
},

The module’s createOptions is the Docker ContainerCreate structure. We do inject some additional information as part of the module configuration, but most createOptions are passed to the container runtime as-is.
Here are some options to get the createOptions you want:
Use IoT Edge tooling such as VS Code with the Azure IoT Edge extension and a deployment.template.json to have it do the escaping for you. The manifest becomes much more readable, and the extension even provides auto-complete.
Start your development by just running a container with commands like docker run yourContainer, and once you're happy with it, inspect it with docker inspect yourContainer. This will give you the docker run options in JSON format.
Look at the Docker API here:
https://docs.docker.com/engine/api/v1.40/#operation/ContainerCreate
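To illustrate the escaping, the createOptions value is simply the ContainerCreate JSON serialized into a string. A minimal Python sketch, reusing the hypothetical module settings and image name from the question:

```python
import json

# The readable form of the ContainerCreate options from the question.
create_options = {
    "HostConfig": {
        "Binds": ["/tmp/.X11-unix:/tmp/.X11-unix"],
        "LogConfig": {
            "Type": "json-file",
            "Config": {"max-size": "100m", "max-file": "2000"},
        },
    }
}

# The deployment manifest expects createOptions as an escaped JSON *string*,
# so serialize the dict and embed the result in the module settings.
settings = {
    "image": "mycoolimage.registry.example.com:latest",
    "createOptions": json.dumps(create_options),
}

print(settings["createOptions"])
```

Round-tripping the string through json.loads recovers the original structure, which is also a quick way to sanity-check a hand-escaped manifest.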

Related

Docker local registry - Image naming [duplicate]

By default, if I issue command:
sudo docker pull ruby:2.2.1
it will pull from the official docker.io registry:
Pulling repository docker.io/library/ruby
How do I change it to my private registry? That is, if I issue
sudo docker pull ruby:2.2.1
it will pull from my own private registry, the output is something like:
Pulling repository my_private.registry:port/library/ruby
UPDATE: Following your comment, it is not currently possible to change the default registry, see this issue for more info.
You should be able to do this, substituting the host and port to your own:
docker pull localhost:5000/registry-demo
If the server is remote/has auth you may need to log into the server with:
docker login https://<YOUR-DOMAIN>:8080
Then running:
docker pull <YOUR-DOMAIN>:8080/test-image
There is the use case of a mirror of Docker Hub (such as Artifactory or a custom one), which I haven't seen mentioned here. This is one of the most valid cases where changing the default registry is needed.
Luckily, Docker (at least version 19.03.3) allows you to set a mirror (tested in Docker CE). I don't know if this will work with additional images pushed to that mirror that aren't on Docker Hub, but I do know it will use the mirror instead. Docker documentation: https://docs.docker.com/registry/recipes/mirror/#configure-the-docker-daemon.
Essentially, you need to add "registry-mirrors": [] to the /etc/docker/daemon.json configuration file. So if you have a mirror hosted at https://my-docker-repo.my.company.com, your /etc/docker/daemon.json should contain:
{
"registry-mirrors": ["https://my-docker-repo-mirror.my.company.com"]
}
Afterwards, restart the Docker daemon. Now if you do a docker pull postgres:12, Docker should fetch the image from the mirror instead of directly from Docker Hub. This is much better than prepending all image names with my-docker-repo.my.company.com.
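If you script the daemon.json change, merging rather than overwriting avoids clobbering unrelated settings. A sketch in Python, using the hypothetical mirror URL above:

```python
import json

def add_registry_mirror(daemon_json_text: str, mirror_url: str) -> str:
    """Merge a mirror into daemon.json content, preserving any other keys."""
    config = json.loads(daemon_json_text) if daemon_json_text.strip() else {}
    mirrors = config.setdefault("registry-mirrors", [])
    if mirror_url not in mirrors:
        mirrors.append(mirror_url)
    return json.dumps(config, indent=2)

# Example: an existing daemon.json that already sets a log driver.
updated = add_registry_mirror(
    '{"log-driver": "json-file"}',
    "https://my-docker-repo-mirror.my.company.com",
)
print(updated)
```

Write the result back to /etc/docker/daemon.json and restart the daemon as described above.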
It turns out this is actually possible, but not using the genuine Docker CE or EE version.
You can either use Red Hat's fork of docker with the '--add-registry' flag or you can build docker from source yourself with registry/config.go modified to use your own hard-coded default registry namespace/index.
The short answer to this is you don't, or at least you really shouldn't.
Yes, there are some container runtimes that allow you to change the default namespace, specifically those from RedHat. However, RedHat now regrets this functionality and discourages customers from using it. Docker has also refused to support this.
The reason this is so problematic is that it results in an ambiguous namespace of images. The same command run on two different machines could pull different images depending on which registry they are configured to use. Since compose files, helm templates, and other ways of running containers are shared between machines, this actually introduces a security vulnerability.
An attacker could squat on well known image names in registries other than Docker Hub with the hopes that a user may change their default configuration and accidentally run their image instead of the one from Hub. It would be trivial to create a fork of a tool like Jenkins, push the image to other registries, but with some code that sends all the credentials loaded into Jenkins out to an attacker server. We've even seen this causing security vulnerability reports this year for other package managers like PyPI, NPM, and RubyGems.
Instead, the direction of container runtimes like containerd is to make all image names fully qualified, removing the Docker Hub automatic expansion (tooling on top of containerd like Docker still apply the default expansion, so I doubt this is going away any time soon, if ever).
Docker does allow you to define registry mirrors for Docker Hub that it will query before querying Hub; however, this assumes everything is still within the same namespace and the mirror is just a copy of upstream images, not a different namespace of images. The TL;DR on setting that up: put the following in /etc/docker/daemon.json and then run systemctl reload docker:
{
"registry-mirrors": ["https://<my-docker-mirror-host>"]
}
For most, this is a non-issue (the issue to me is that the docker engine doesn't have an option to mirror non-Hub registries). The image name is defined in a configuration file or a script, so typing it once in that file is easy enough. And with tooling like compose files and Helm templates, the registry can be turned into a variable, allowing organizations to explicitly pull images for their deploy from a configurable registry name.
If you are using the Fedora distro, you can change the file
/etc/containers/registries.conf
which lists the default registries (docker.io among them).
Docker's official position is explained in issue #11815:
Issue 11815: Allow to specify default registries used in pull command
Resolution:
Like pointed out earlier (#11815), this would fragment the namespace, and hurt the community pretty badly, making dockerfiles no longer portable.
[the Maintainer] will close this for this reason.
Red Hat had a specific implementation that allowed it (see the answer above), but it was refused by the upstream Docker project. It relied on the --add-registry argument, which was set in /etc/containers/registries.conf on RHEL/CentOS 7.
EDIT:
Actually, Docker supports registry mirrors (also known as "Run a Registry as a pull-through cache").
https://docs.docker.com/registry/recipes/mirror/#configure-the-docker-daemon
It seems it won't be supported due to the fragmentation it would create within the community (i.e. two users would get different images pulling ubuntu:latest). You simply have to add the host in front of the image name. See this github issue to join the discussion.
(Note, this is not intended as an opinionated comment, just a very short summary of the discussion that can be followed in the mentioned github issue.)
I added the following options to /etc/docker/daemon.json
(I used CentOS 7):
"add-registry": ["192.168.100.100:5001"],
"block-registry": ["docker.io"],
and after that restarted the Docker daemon.
It works without docker.io.
I hope this will be helpful to someone.
Earlier this could be achieved using DOCKER_OPTS in the /etc/default/docker config file, which worked on Ubuntu 14.04 and had some issues on Ubuntu 15.04. Not sure if this has been fixed.
The line below needs to go into the file /etc/default/docker on the host that runs the docker daemon, pointing it at the private registry installed on your local network. Note: you will need to restart the docker service after this change.
DOCKER_OPTS="--insecure-registry <priv registry hostname/ip>:<port>"
I'm adding to the original answer given by Guy, which is still valid today (soon 2020).
Overriding the default docker registry, as you would with Maven, is actually not a good practice.
When using Maven, you pull artifacts from the Maven Central Repository through your local repository management system, which acts as a proxy. These artifacts are plain, raw libs (jars), and it is quite unlikely that you will push jars with the same name.
Docker images, on the other hand, are fully operational, runnable environments, and it makes total sense to pull an image from Docker Hub, modify it, and push it to your local registry management system with the same name, because it is exactly what its name says it is, just in your enterprise context. In this case, the only distinction between the two images is precisely their path!
Hence the rule: the prefix of an image indicates its origin; by default, if an image has no prefix, it is pulled from Docker Hub.
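That prefix rule can be sketched as code. The following is an approximation of Docker's short-name expansion, not the engine's actual implementation: the first path component is treated as a registry host only if it contains a dot or a colon or is localhost.

```python
def expand_image_name(image: str) -> str:
    """Approximate Docker's default image-name expansion."""
    # Split off the first path component to decide whether it names a registry.
    first, sep, _ = image.partition("/")
    if sep and ("." in first or ":" in first or first == "localhost"):
        return image  # already fully qualified
    # Otherwise it is a Docker Hub short name; add the default registry,
    # and the library/ namespace for single-component official images.
    if "/" in image:
        return "docker.io/" + image
    return "docker.io/library/" + image

print(expand_image_name("ubuntu:latest"))          # docker.io/library/ubuntu:latest
print(expand_image_name("bitnami/postgres"))       # docker.io/bitnami/postgres
print(expand_image_name("localhost:5000/my-img"))  # localhost:5000/my-img
```

This is why an unprefixed name is unambiguous only as long as everyone agrees that the implicit origin is Docker Hub.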
I didn't see an answer for macOS, so I'll add one here.
Two methods:
Option 1 (through the Docker Desktop GUI):
Preferences -> Docker Engine -> Edit file -> Apply and Restart
Option 2:
Directly edit the file ~/.docker/daemon.json
I haven't tried it, but perhaps hijacking the DNS resolution process by adding a line in /etc/hosts for hub.docker.com or something similar (docker.io?) could work.

is docker has config to replace image`s repository [duplicate]


How to pull image using HELM without URL [duplicate]


Can I query a remote docker registry for image information (such as OS)?

Is there any command I can run against a docker registry (public and private) which can return the image operating system? Specifically, I'm looking to distinguish between Linux and Windows images, not between linux distros.
The reason is that we have a docker-based build system today for which we're trying to add support for Windows containers, and LCOW. In theory, Linux builds can take place on either Windows or Linux servers, so we want the tool to have the ability to automatically prepend sudo on the docker command and --platform on the pull/run commands (as well as some other things) when appropriate. However, this requires that we automatically detect the OS of the image. I have looked through the docker documentation and was unable to find any support for such a query, but perhaps I missed something.
As I'm writing this, I'm realizing that if the docker client could automatically deduce the OS of an image, they probably would have built this detection into the client rather than introduce the new --platform argument on all the various docker commands.
Project Atomic has a tool called "skopeo" that does what you want. Here's a link to the github repository
After you've installed the tool, you should simply be able to do:
$ skopeo inspect docker://docker.io/microsoft/nanoserver | jq '.Os'
"windows"
$ skopeo inspect docker://docker.io/library/ubuntu | jq '.Os'
"linux"
Background:
If you look at the docker image information available under /var/lib/docker/image/<storage-driver>/imagedb/content/sha256/<image-sha>, you'll find that it looks something like this:
{
"architecture": "amd64",
"config": {
...
},
"container": "6e8eb576ec0f7564a85c0fbd39824e0e91c031aa0019c56c5f992449e88d1142",
"container_config": {
...
},
"created": "2018-03-06T22:17:26.531075062Z",
"docker_version": "17.06.2-ce",
"history": [
...
],
"os": "linux",
"rootfs": {
"type": "layers",
"diff_ids": [
"sha256:a94e0d5a7c404d0e6fa15d8cd4010e69663bd8813b5117fbad71365a73656df9",
"sha256:88888b9b1b5b7bce5db41267e669e6da63ee95736cb904485f96f29be648bfda",
"sha256:52f389ea437ebf419d1c9754d0184b57edb45c951666ee86951d9f6afd26035e",
"sha256:52a7ea2bb533dc2a91614795760a67fb807561e8a588204c4858a300074c082b",
"sha256:db584c622b50c3b8f9b8b94c270cc5fe235e5f23ec4aacea8ce67a8c16e0fbad"
]
}
}
As you can see there's a field available called os, that says linux. I suspect if you look at that field for a windows image, it'll say windows. This is the information available via docker inspect.
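Reading that field is trivial once you have the config JSON in hand, whether from disk or from a registry. A sketch, using an abbreviated version of the document above:

```python
import json

def image_os(config_json: str) -> str:
    """Return the 'os' field from an image config document."""
    return json.loads(config_json).get("os", "unknown")

# Abbreviated version of the config document shown above.
sample = '{"architecture": "amd64", "os": "linux", "rootfs": {"type": "layers"}}'
print(image_os(sample))  # linux
```

The hard part, as discussed below, is fetching that config document from a remote registry in the first place.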
The tricky bit is figuring out which API calls to make to docker.io to get this information. The docker hub/registry APIs are a little under-documented, and it's not obvious exactly where this information comes from. Providing it in a straightforward way seems to be an open bug upstream:
Proposal: inspect image json on a registry

Is it possible to specify a Docker image build argument at pod creation time in Kubernetes?

I have a Node.JS based application consisting of three services. One is a web application, and two are internal APIs. The web application needs to talk to the APIs to do its work, but I do not want to hard-code the IP address and ports of the other services into the codebase.
In my local environment I am using the nifty envify Node.JS module to fix this. Basically, I can pretend that I have access to environment variables while I'm writing the code, and then use the envify CLI tool to convert those variables to hard-coded strings in the final browserified file.
I would like to containerize this solution and deploy it to Kubernetes. This is where I run into issues...
I've defined a couple of ARG variables in my Docker image template. These get turned into environment variables via RUN export FOO=${FOO}, and after running npm run-script build I have the container I need. OK, so I can run:
docker build . -t residentmario/my_foo_app:latest --build-arg FOO=localhost:9000 --build-arg BAR=localhost:3000
And then push that up to the registry with docker push.
My qualm with this approach is that I've only succeeded in punting having hard-coded variables to the container image. What I really want is to define the paths at pod initialization time. Is this possible?
Edit: Here are two solutions.
PostStart
Kubernetes comes with a lifecycle hook called PostStart. This is described briefly in "Container Lifecycle Hooks".
This hook fires as soon as the container reaches the ContainerCreated status, i.e. the container is done being pulled and is fully initialized. You can then use the hook to jump into the container and run arbitrary commands.
In our case, I can create a PostStart event that, when triggered, rebuilds the application with the correct paths.
Unless you created a Docker image that doesn't actually run anything (which seems wrong to me, but let me know if this is considered an OK practice), this does require some duplicate work: stopping the application, rerunning the build process, and starting the application up again.
Command
Per the comment below, this event doesn't necessarily fire at the right time. Here's another way to do it that's guaranteed to work (and hence, superior).
A useful Docker container ends with some variant on a CMD serving the application. You can overwrite this run command in Kubernetes, as explained in the "Define a Command and Arguments for a Container" section of the documentation.
So I added a command to the pod definition that ran a shell script that (1) rebuilt the application using the correct paths, provided as an environment variable to the pod and (2) started serving the application:
command: ["/bin/sh"]
args: ["./scripts/build.sh"]
Worked like a charm.
