Override a parent image's docker container command to run extra steps afterwards

I want to change a user password and run a SQL script against a DB2 container image. How do I do whatever the parent image called for, but then run a few commands after that completes? I need this to run using docker-compose, because the database will be used to support an acceptance test. In my docker-compose.yml file I have a command property, but I checked the container and do not see the result of the touch statement, so it never ran.
My docker-compose.yml file is:
version: "3.2"
services:
  ssc-file-generator-db2-test:
    container_name: "ssc-file-generator-db2-test"
    image: ibmcom/db2:latest
    command: /bin/bash -c "touch /command-run && echo \"db2inst1:db2inst1\" | chpasswd && su db2inst1 && db2 -tvf /db2-test-scaffolding/init.sql"
    hostname: db2server
    privileged: true
    # entrypoint: ["/bin/sh -c ]
    ports:
      - 50100:50000
      - 55100:55000
    networks:
      - back-tier
    restart: "no"
    volumes:
      - db2-test-scaffolding:/db2-test-scaffolding
    env_file:
      - acceptance-run.environment
  # ssc-file-generator:
  #   container_name: "ssc-file-generator_testing"
  #   image: ssc-file-generator
  #   depends_on: ["ssc-file-generator-db2-test]
  #   command:
  #   env_file: ["acceptance-run.environment"]
networks:
  back-tier: {}
volumes:
  db2-test-scaffolding:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: ./db2-test-scaffolding
My acceptance-run.environment file is:
BCUPLOAD_DATASOURCE_DIALECT=org.hibernate.dialect.DB2Dialect
BCUPLOAD_DATASOURCE_DRIVER=com.ibm.db2.jcc.DB2Driver
BCUPLOAD_DATASOURCE_PASSWORD=bluecost
BCUPLOAD_DATASOURCE_URL=jdbc:db2://localhost:50100/mydb:currentSchema=FILE_GENERATOR
BCUPLOAD_DATASOURCE_USERNAME=bluecost
B2INSTANCE=db2inst1
DB2INST1_PASSWORD=db2inst1
DBNAME=MYDB
DEBUG_SECRETS=true
file-generator.test.files.path=src/test/acceptance/resources/files/
# Needed for DB2 container
LICENSE=accept
The docker image is ibmcom/db2:latest. For convenience, here is the output of docker inspect ibmcom/db2:latest:
[
{
"Id": "sha256:e304e217603b80b31c989574081b2badf210b4466c7f74cf32087ee0a1ba6e04",
"RepoTags": [
"ibmcom/db2:latest"
],
"RepoDigests": [
"ibmcom/db2@sha256:77da4492bf18c49a1012aa6071a16aee0039dca9c0a2a492345b6b030714a54f"
],
"Parent": "",
"Comment": "",
"Created": "2021-03-29T18:54:36.94484751Z",
"Container": "e59bda8065b72a0e440d145d6d90ba77231a514e811e66651d4fa6da98a34910",
"ContainerConfig": {
"Hostname": "6125cd0dc6e6",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"50000/tcp": {},
"55000/tcp": {},
"60006/tcp": {},
"60007/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"container=oci",
"STORAGE_DIR=/database",
"HADR_SHARED_DIR=/hadr",
"DBPORT=50000",
"TSPORT=55000",
"SETUPDIR=/var/db2_setup",
"SETUPAREA=/tmp/setup",
"NOTVISIBLE=in users profile",
"LICENSE_NAME=db2dec.lic"
],
"Cmd": [
"/bin/sh",
"-c",
"#(nop) ",
"ENTRYPOINT [\"/var/db2_setup/lib/setup_db2_instance.sh\"]"
],
"Image": "sha256:e65b35603167c75a86515ef4af101a539cbbdf561bcb9efd656d17b8d867c7da",
"Volumes": {
"/database": {},
"/hadr": {}
},
"WorkingDir": "",
"Entrypoint": [
"/var/db2_setup/lib/setup_db2_instance.sh"
],
"OnBuild": [],
"Labels": {
"architecture": "x86_64",
"build-date": "2021-03-10T06:09:00.139818",
"com.redhat.build-host": "cpt-1007.osbs.prod.upshift.rdu2.redhat.com",
"com.redhat.component": "ubi7-container",
"com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI",
"description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
"distribution-scope": "public",
"io.k8s.description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
"io.k8s.display-name": "Red Hat Universal Base Image 7",
"io.openshift.tags": "base rhel7",
"name": "ubi7",
"release": "338",
"summary": "Provides the latest release of the Red Hat Universal Base Image 7.",
"url": "https://access.redhat.com/containers/#/registry.access.redhat.com/ubi7/images/7.9-338",
"vcs-ref": "a4e710a688a6374670ecdd56637c3f683d11cbe3",
"vcs-type": "git",
"vendor": "Red Hat, Inc.",
"version": "7.9"
}
},
"DockerVersion": "19.03.6",
"Author": "db2_download_and_go",
"Config": {
"Hostname": "6125cd0dc6e6",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"50000/tcp": {},
"55000/tcp": {},
"60006/tcp": {},
"60007/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"container=oci",
"STORAGE_DIR=/database",
"HADR_SHARED_DIR=/hadr",
"DBPORT=50000",
"TSPORT=55000",
"SETUPDIR=/var/db2_setup",
"SETUPAREA=/tmp/setup",
"NOTVISIBLE=in users profile",
"LICENSE_NAME=db2dec.lic"
],
"Cmd": null,
"Image": "sha256:e65b35603167c75a86515ef4af101a539cbbdf561bcb9efd656d17b8d867c7da",
"Volumes": {
"/database": {},
"/hadr": {}
},
"WorkingDir": "",
"Entrypoint": [
"/var/db2_setup/lib/setup_db2_instance.sh"
],
"OnBuild": [],
"Labels": {
"architecture": "x86_64",
"build-date": "2021-03-10T06:09:00.139818",
"com.redhat.build-host": "cpt-1007.osbs.prod.upshift.rdu2.redhat.com",
"com.redhat.component": "ubi7-container",
"com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI",
"description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
"distribution-scope": "public",
"io.k8s.description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
"io.k8s.display-name": "Red Hat Universal Base Image 7",
"io.openshift.tags": "base rhel7",
"name": "ubi7",
"release": "338",
"summary": "Provides the latest release of the Red Hat Universal Base Image 7.",
"url": "https://access.redhat.com/containers/#/registry.access.redhat.com/ubi7/images/7.9-338",
"vcs-ref": "a4e710a688a6374670ecdd56637c3f683d11cbe3",
"vcs-type": "git",
"vendor": "Red Hat, Inc.",
"version": "7.9"
}
},
"Architecture": "amd64",
"Os": "linux",
"Size": 2778060115,
"VirtualSize": 2778060115,
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/49d83ba2eb50cdbfc5a9e3a7b4baf907a9b4326aa0710689f602bb3cff01d820/diff:/var/lib/docker/overlay2/ba54659cc4ec10fa84edc49d5480ebe4897629f841d76ae79a4fb0c2edb791a5/diff:/var/lib/docker/overlay2/2238ae349d70686609b990b63c0066d6e51d94be59801a81c7f5b4d97da1fe02/diff:/var/lib/docker/overlay2/704708b72448f8a4750db3aabd43c12f23ad7e6d3f727aa5977bd7ac4db8e8cb/diff:/var/lib/docker/overlay2/1b47e1515517af553fd8b986c841e87d8ba813d53739344c9b7350ad36b54b0b/diff:/var/lib/docker/overlay2/0a580802a7096343aa5d8de5039cf5a011e66e481793230dced8769b024e5cd2/diff:/var/lib/docker/overlay2/4da91655770b0e94236ea8da2ea8ff503467161cf85473a32760f89b56d213ff/diff:/var/lib/docker/overlay2/401c640771a27c70f20abf5c48b0be0e2f42ed5b022f81f58ebc0810831283ea/diff:/var/lib/docker/overlay2/8985c59d1ab32b8d8eaf4c11890801cb228d47cc7437b3e9b4f585e7296e4b6a/diff:/var/lib/docker/overlay2/ec66f9872de7b5310bac2bd5fd59552574df56bb06dcd5dd61ff2b63002d77ed/diff:/var/lib/docker/overlay2/fcf40217c8477dcf4e5fafc8c83408c3c788f367ed67c78cb0bc312439674fcf/diff",
"MergedDir": "/var/lib/docker/overlay2/8bcf7bf60181d555a11fb8df79a28cb2f9d8737d28fe913a252694ba2165c1d1/merged",
"UpperDir": "/var/lib/docker/overlay2/8bcf7bf60181d555a11fb8df79a28cb2f9d8737d28fe913a252694ba2165c1d1/diff",
"WorkDir": "/var/lib/docker/overlay2/8bcf7bf60181d555a11fb8df79a28cb2f9d8737d28fe913a252694ba2165c1d1/work"
},
"Name": "overlay2"
},
"RootFS": {
"Type": "layers",
"Layers": [
"sha256:87e96a33b6fb724886ccda863dcbf85aab1119d380dc8d60fc7eeace293fc3a8",
"sha256:7dfef4d05d0afc0383f5ebd8d9f3f7f7e17406f7e9e5744bead1a65e5ab47d0e",
"sha256:51a646f7fd864ded24db2d87aaef69767cec8cfa63117bdca1a80cc4e0a77329",
"sha256:9e2474c7feefaf8fe58cdb4d550edf725288c109f7842c819c734907406e9095",
"sha256:d4d38bb7d4b3e7ea2b17acce63dd4b9ed926c7c0bbe028393228caf8933a4482",
"sha256:4ec8c6264294fc505d796e17187c4c87099ff8f76ac8f337653e4643a9638d9e",
"sha256:84a0a1068d25a8fa7b0f3e966b0313d31bc9e7405484da2a9ebf0fe1ebaf40dc",
"sha256:956ab4664636dcce9d727ed0580f33ec510c8903ee827ce3ce72d4ba1184139b",
"sha256:55f8b1bcde6acbd521024e3d10ed4a3a3bdf567cfd029b1876bd646ff502270b",
"sha256:8c2496f1c442c3303273991e9cd5c4a5ffc0ab2ad7e2547976fe451095798390",
"sha256:583acd9a453ded660462a120737ffec2def4416a573c6ea7ed2b132e403d9c08",
"sha256:604c94797d42c86bfbc3d25e816a105b971805ae886bec8bc69bdae4ff20e1b6"
]
},
"Metadata": {
"LastTagTime": "0001-01-01T00:00:00Z"
}
}
]

I solved the problem by debugging the image's setup script and learning that it provides a custom directory from which it runs scripts; I believe these get run after the db2 service starts. While I suspect I could have specified an entrypoint instead, that would not necessarily play well with the ibmcom/db2 container. Posted below is my docker-compose file, showing the volume mounts I used for this particular container.
Note that this defines the mounts in a "bind" format, as opposed to the more common named volume. This approach let me pick where the source data is stored on the host; had I specified a plain named volume, Docker would have persisted the volume data in some obscure place inside my WSL system. There's probably a better approach, but I'm still learning Docker builds.
version: "3.2"
services:
  ssc-file-generator-db2-test:
    container_name: "ssc-file-generator-db2-test"
    image: ibmcom/db2:latest
    hostname: db2server
    privileged: true
    ports:
      - 50100:50000
      - 55100:55000
    networks:
      - back-tier
    restart: "no"
    volumes:
      - setup-sql:/setup-sql
      - db2-shell-scripts:/var/custom
      - host-dirs:/host-dirs
      # Uncomment below to use database outside the container
      # - database:/database
    env_file:
      - acceptance-run.environment
networks:
  back-tier: {}
volumes:
  setup-sql:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: ./setup-sql
  db2-shell-scripts:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: ./db2-shell-scripts
  host-dirs:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: ./host-dirs
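For reference, the ibmcom/db2 image executes shell scripts it finds in /var/custom after the instance and database are set up, which is why the db2-shell-scripts mount above works. A minimal sketch of such a script (the file name and the /setup-sql/init.sql path are my own choices for this setup, not anything mandated by the image):

```shell
#!/bin/bash
# ./db2-shell-scripts/run-init-sql.sh
# Picked up from /var/custom by the image's setup script once DB2 is up.
# Runs the SQL scaffolding as the instance owner; assumes init.sql was
# mounted through the setup-sql volume defined above.
su - db2inst1 -c "db2 -tvf /setup-sql/init.sql"
```

The script may need the executable bit set on the host (chmod +x) before the container is started.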

Related

Can't access .Net Core application through Docker Container

I'm pretty new with docker, but running up against a wall trying to get my implementation to work through Docker Compose.
Docker Compose File
version: "3.4"
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 5000:5000
      - 5001:5001
Dockerfile
# syntax=docker/dockerfile:1
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build-env
WORKDIR /app
# Install EF tools
RUN dotnet tool install --global dotnet-ef
ENV PATH="${PATH}:/root/.dotnet/tools"
# Generate Certificates
RUN dotnet dev-certs https -ep ${HOME}/.aspnet/https/aspnetapp.pfx -p TestPassword
RUN dotnet dev-certs https --trust
# Copy everything else and build
COPY . ./
COPY Setup.sh Setup.sh
RUN dotnet restore
RUN dotnet build -c Release -o out
# Build runtime image
RUN chmod +x ./Setup.sh
CMD /bin/bash ./Setup.sh
Setup.sh Entry Point
#!/bin/bash
set -e

run_cmd="dotnet run --project Artis.Merchant.API/Artis.Merchant.API.csproj --launch-profile Artis.Merchant.API"

until dotnet ef database update --project Artis.Models/Artis.Models.csproj; do
  >&2 echo "SQL Server is starting up"
  sleep 1
done

>&2 echo "SQL Server is up - executing command"
exec $run_cmd
Launch Settings for Project
"profiles": {
  "Artis.Merchant.API": {
    "commandName": "Project",
    "launchBrowser": true,
    "launchUrl": "swagger",
    "environmentVariables": {
      "ASPNETCORE_ENVIRONMENT": "Local"
    },
    "applicationUrl": "https://localhost:5001;http://localhost:5000"
  },
So both my http and https endpoints seem to be unreachable when launched through Docker. The CLI output is exactly the same, so the app appears to be up and running inside Docker. Any ideas what I'm doing wrong?
Locally running dotnet run --project Artis.Merchant.API/Artis.Merchant.API.csproj --launch-profile Artis.Merchant.API
info: Microsoft.Hosting.Lifetime[14]
Now listening on: http://localhost:5000
info: Microsoft.Hosting.Lifetime[14]
Now listening on: https://localhost:5001
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Local
info: Microsoft.Hosting.Lifetime[0]
Content root path: C:\Users\ddrob\source\Artis.Merchant.API
The above works fine, and I can access it as normal through https://localhost:5001 or http://localhost:5000.
Output from Docker after running docker compose up
artis_api-app-1 | info: Microsoft.Hosting.Lifetime[14]
artis_api-app-1 | Now listening on: http://localhost:5000
artis_api-app-1 | info: Microsoft.Hosting.Lifetime[14]
artis_api-app-1 | Now listening on: https://localhost:5001
artis_api-app-1 | info: Microsoft.Hosting.Lifetime[0]
artis_api-app-1 | Application started. Press Ctrl+C to shut down.
artis_api-app-1 | info: Microsoft.Hosting.Lifetime[0]
artis_api-app-1 | Hosting environment: Local
artis_api-app-1 | info: Microsoft.Hosting.Lifetime[0]
artis_api-app-1 | Content root path: /app/Artis.Merchant.API
I've been testing by making an HTTP GET call to https://localhost:5001/api/health, which should just return a 200. It works fine running locally; through Docker I get either socket hang up when accessed over non-https, or client network socket disconnected before secure TLS connection was established when accessed over https.
EDIT
For more insight, here are the outputs from docker ps and docker inspect artis_api-app
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4b71ffc2c5a1 artis_api-app "/bin/sh -c '/bin/ba…" 43 minutes ago Up 23 minutes 0.0.0.0:5000-5001->5000-5001/tcp artis_api-app-1
[
{
"Id": "sha256:505685ec26fb58cc87fe4ec5166f2b3c0978b251953f5e9d431776ad036fc839",
"RepoTags": [
"artis_api-app:latest"
],
"RepoDigests": [],
"Parent": "",
"Comment": "buildkit.dockerfile.v0",
"Created": "2022-09-14T18:40:33.7696322Z",
"Container": "",
"ContainerConfig": {
"Hostname": "",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": null,
"Cmd": null,
"Image": "",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": null,
"OnBuild": null,
"Labels": null
},
"DockerVersion": "",
"Author": "",
"Config": {
"Hostname": "",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/root/.dotnet/tools",
"ASPNETCORE_URLS=",
"DOTNET_RUNNING_IN_CONTAINER=true",
"DOTNET_VERSION=6.0.9",
"ASPNET_VERSION=6.0.9",
"DOTNET_GENERATE_ASPNET_CERTIFICATE=false",
"DOTNET_NOLOGO=true",
"DOTNET_SDK_VERSION=6.0.401",
"DOTNET_USE_POLLING_FILE_WATCHER=true",
"NUGET_XMLDOC_MODE=skip",
"POWERSHELL_DISTRIBUTION_CHANNEL=PSDocker-DotnetSDK-Debian-11"
],
"Cmd": [
"/bin/sh",
"-c",
"/bin/bash ./Setup.sh"
],
"ArgsEscaped": true,
"Image": "",
"Volumes": null,
"WorkingDir": "/app",
"Entrypoint": null,
"OnBuild": null,
"Labels": null
},
"Architecture": "amd64",
"Os": "linux",
"Size": 6245959471,
"VirtualSize": 6245959471,
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/uqv91vlzccv2h5y9g9mzcbl2a/diff:/var/lib/docker/overlay2/dewh3h889lvrnae1qq1gaucga/diff:/var/lib/docker/overlay2/21fioiit0habui6whx5x56e4w/diff:/var/lib/docker/overlay2/xgcecdok1kdh5je5ti1xuyylx/diff:/var/lib/docker/overlay2/yxu9wlvfmwn5rp9lth9f9ff8s/diff:/var/lib/docker/overlay2/imqsg5pxz3xkzxsaucymd3xv5/diff:/var/lib/docker/overlay2/etcmf1dskml8g13hg0eeqgtrq/diff:/var/lib/docker/overlay2/3orhtkibg5z6eeosvazrsjjwx/diff:/var/lib/docker/overlay2/e903ebed1d7345de8bd1780dbb132091b2930ddaa014ef4f399ce87be1e105d4/diff:/var/lib/docker/overlay2/d49d1b24c5821a65689ef573e9f6e7f4562cc2cd7fb4d1e8f2f6144f31cbb1dc/diff:/var/lib/docker/overlay2/bbf9acaefd8441bb31972a56526870d63995056b59f59166560007d0a114b2f9/diff:/var/lib/docker/overlay2/2306e276d00d98d2aaff2af655cba48efe5b4c0c072f54b6883818a5063d2623/diff:/var/lib/docker/overlay2/7ab5f58b84e23d093901394612c6cc71f8b704f8408fcaab10ed9c2b799e71e4/diff:/var/lib/docker/overlay2/2cd518af44aa8bae9adaa6460d9c88a761f2fdccdd36b69ddcd4809d26fbb2a0/diff:/var/lib/docker/overlay2/ea6cf0ad92836e7d871430c029e19c688f5b7caeaee475f1b7fb18b215511cd9/diff:/var/lib/docker/overlay2/892c98581f39f02c87c8ca05c73b98bdc29ab6862ebb2e70ee8c7006d1f90ccf/diff",
"MergedDir": "/var/lib/docker/overlay2/u0v4b4saadkhe84ztlc2blsnj/merged",
"UpperDir": "/var/lib/docker/overlay2/u0v4b4saadkhe84ztlc2blsnj/diff",
"WorkDir": "/var/lib/docker/overlay2/u0v4b4saadkhe84ztlc2blsnj/work"
},
"Name": "overlay2"
},
"RootFS": {
"Type": "layers",
"Layers": [
"sha256:b45078e74ec97c5e600f6d5de8ce6254094fb3cb4dc5e1cc8335fb31664af66e",
"sha256:5ec686cbc3c7aea55c4f80c574962f120e5f81ed0b7ccacbbde51f8d819f8247",
"sha256:e487c4dad54c656811ed2064a764f7240bb3b5936d497ec757991f9344be616d",
"sha256:64d665f70cc187b1c5a5c1fc8d0a4431ea0f0069c1985ce330c55523466c22b2",
"sha256:efccb7d95dee67a40c57966066356e47a301cc6028db923ab58fe4f21564fefb",
"sha256:b4d89feca49ab5217ab7d09079ab1f07c618570d0822ad74ccce634313cb0c91",
"sha256:d5471ff23747a10089f58397f39c2f66c6c8937687dd2b3592ab6fe09c6756d0",
"sha256:08affa1f86cb7d7262620e2517bc3308598db5d047135c5b7db00994d22f6701",
"sha256:6f1bf9eb1c14548bc0b119efb283637880394c2cae2117de367238ab3b7fcb80",
"sha256:be544cd1ea837e49241d66815afaeddfe79b3d869972f64451e3c2dcdf8c10eb",
"sha256:7c2ba258f59487f4dacf912a4f2c0a7598e2b4082283555fd5ca127da145cdc9",
"sha256:3bcd28ee7f58f12baceeb1ab2c097675ca35e66f3e350869a73d5bc508fcfd57",
"sha256:11f2bbe76890bfd069e0f1b75dd4acd35dc33f071f061e5c6751e5af7723e897",
"sha256:d5f7661f5ac2f07ba5836972af96f89358381cbb5399342ef6fd03b6306fe000",
"sha256:5879a59c7c2aba6ffcc9535592116cc92e7efd4d7acff9f7716e1e2ac167c3e8",
"sha256:fdedee5f422b0f13237dd54f9894c64c4323f48d7d06e10a7640991ce9dd62ce",
"sha256:01104bf88915fea5e113a37e57b450afa15ec647816e61b17bea508082a8ee32"
]
},
"Metadata": {
"LastTagTime": "0001-01-01T00:00:00Z"
}
}
]
I noticed the outputs are different when I inspect artis_api-app-1 instead of the Image named artis_api-app. I figured the only relevant portion of the former is the network settings output.
"NetworkSettings": {
"Bridge": "",
"SandboxID": "174a63cebce5ca8ff7c6b218a73c90e5b813caad60b008777a1fdb031c595ced",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"5000/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "5000"
}
],
"5001/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "5001"
}
]
},
"SandboxKey": "/var/run/docker/netns/174a63cebce5",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"artis_api_default": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"artis_api-app-1",
"app",
"4b71ffc2c5a1"
],
"NetworkID": "dc60832ac2b09450067b3edeb1cd944ad9c7c4805a674da5dba456654db49125",
"EndpointID": "cd04c2b9261b8ec3a095a6c3db6001484e41a41ca0dd3c32a72c093cf3787e7b",
"Gateway": "172.22.0.1",
"IPAddress": "172.22.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:16:00:03",
"DriverOpts": null
}
}
}
EDIT 2
Here is my initial Program.cs entry point since I've had some questions regarding that.
public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.ConfigureAppConfiguration((context, configBuilder) =>
                {
                    string assemblyName = Assembly.GetExecutingAssembly().GetName().Name;
                    string envName = context.HostingEnvironment.EnvironmentName;
                    configBuilder.Sources.Clear();
                    configBuilder.AddJsonFile($"{assemblyName}.appsettings.json", optional: false, reloadOnChange: true);
                    configBuilder.AddJsonFile($"{assemblyName}.appsettings.{envName}.json", optional: true, reloadOnChange: true);
                });
                webBuilder.UseStartup<Startup>();
            });
}
Answer
Along with the accepted answer I also needed to include a Kestrel configuration in my appsettings.json file such as this:
"Kestrel": {
  "Endpoints": {
    "Http": {
      "Url": "http://0.0.0.0:5000"
    },
    "Https": {
      "Url": "https://0.0.0.0:5001"
    }
  },
  "EndpointDefaults": {
    "Url": "https://0.0.0.0:5001",
    "Protocols": "Http1"
  }
}
Your application is listening for connections only inside the container (on localhost):
Now listening on: http://localhost:5000
For the published port to work, the application must listen on an external network interface. Listening on http://0.0.0.0:5000 is enough.
Please, try providing a value for the environment variable ASPNETCORE_URLS when running your container.
For example, in docker-compose:
version: "3.4"
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 5000:5000
      - 5001:5001
    environment:
      - ASPNETCORE_ENVIRONMENT=Local
      - ASPNETCORE_URLS=https://+:5001;http://+:5000
      # please review the provided path; based on your
      # setup I am unsure whether it is exact or not
      - ASPNETCORE_Kestrel__Certificates__Default__Path=${HOME}/.aspnet/https/aspnetapp.pfx
      # consider using an env variable to provide the password, to avoid
      # putting sensitive information under version control
      - ASPNETCORE_Kestrel__Certificates__Default__Password=TestPassword
This will allow your app to listen on all the available network interfaces; otherwise it is only accessible through localhost, and be aware that localhost in that context is the container itself.
You could provide the analogous information in the Dockerfile as well.
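For instance, a sketch of the Dockerfile equivalent (the certificate path here is my assumption, based on the ${HOME}/.aspnet/https location used by the dev-certs step in the Dockerfile above):

```dockerfile
# Same settings baked into the image instead of supplied by docker-compose.
ENV ASPNETCORE_URLS="https://+:5001;http://+:5000"
ENV ASPNETCORE_Kestrel__Certificates__Default__Path=/root/.aspnet/https/aspnetapp.pfx
ENV ASPNETCORE_Kestrel__Certificates__Default__Password=TestPassword
```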
Please notice that, in order to support HTTPS, I included the location of the pfx bundle and the corresponding password you used when building the image; both are necessary for that purpose. Consider reading the Microsoft documentation on the subject.
A final word about the certificates: as you can see in the mentioned documentation, the certificates used by the application are usually mounted through a docker volume to avoid including them in your Docker image (which makes them public in practice). That is fine for a POC, but please never store your production certificates and passwords like this.
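As a sketch of that volume-based approach (the paths are assumptions on my part, following the pattern in the Microsoft how-to rather than anything from the question):

```yaml
services:
  app:
    environment:
      - ASPNETCORE_URLS=https://+:5001;http://+:5000
      - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
      - ASPNETCORE_Kestrel__Certificates__Default__Password=TestPassword
    volumes:
      # mount the dev certificate from the host instead of baking it into the image
      - ${HOME}/.aspnet/https:/https:ro
```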
I had the same issue previously, caused by the port setting in Program.cs.
Please check your IWebHostBuilder; you might have specified a specific port like this:
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .UseWebRoot("")
        .UseUrls(urls: "http://localhost:5000");
and change it to this:
public static IHostBuilder CreateWebHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });

Docker network affecting host network on container restart : ERR_NETWORK_CHANGED in chrome

I am facing a weird issue with docker networks. I am using an external bridge network named extenal_network in my docker containers with auto-restart enabled. I am not able to access my host network while any of the containers is restarting due to some error, maybe code- or infra-related.
Please refer to the attached screenshot for more clarity.
I've tried the below links but no luck.
https://superuser.com/questions/1336567/installing-docker-ce-in-ubuntu-18-04-breaks-internet-connectivity-of-host
https://success.docker.com/article/how-do-i-influence-which-network-address-ranges-docker-chooses-during-a-docker-network-create
https://forums.docker.com/t/cant-access-internet-after-installing-docker-in-a-fresh-ubuntu-18-04-machine/53416
https://superuser.com/questions/747735/regularly-getting-err-network-changed-errors-in-chrome/773971
Dockerfile
FROM node:10.20.1-alpine
RUN apk add --no-cache python make g++
WORKDIR /home/app
COPY package.json package-lock.json* ./
RUN npm install
COPY . .
Docker-Compose
version: "3"
services:
  app:
    container_name: app
    build:
      context: ./
      dockerfile: Dockerfile
    image: 'network_poc:latest'
    ports:
      - 8080:8080
    deploy:
      resources:
        limits:
          memory: 2G
    networks:
      - extenal_network
    restart: always
    command: node index.js
networks:
  shared_network:
    external:
      name: extenal_network
docker inspect extenal_network
[
{
"Name": "extenal_network",
"Id": "96476c227ddc14aa23d376392d380b2674fcbad109c90e7436c0cddd5c0a9ac5",
"Created": "2020-04-14T00:17:10.89980675+05:30",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"7621a2b5a7e460a905bf86c427aea38b6374ac621c0c1a2b9eca4b671aea4dfe": {
"Name": "app",
"EndpointID": "04e9d14a17af05eb7a2b478526365cbce7f726a62f5e2cd315244c2639891b1e",
"MacAddress": "**:**:**:**:**:**",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Any help or suggestion is highly appreciated.

How to make a docker container use the host network?

I am running docker for mac. My docker compose configuration file is:
version: "2.3"
services:
  base:
    build:
      context: .
  dev:
    network_mode: "host"
    extends:
      service: base
When the container is launched via docker-compose run --rm dev sh, it can't ping an IP address (172.25.36.32), but I can ping this address from the host. I have set network_mode: "host" in the configuration file. How can I make the docker container share the host network?
I found that host networking doesn't work on Mac. Is there a solution for that on Mac?
Below is the docker network inspect ID output:
[
{
"Name": "my_container_default",
"Id": "0441cf2b99b692d2047ded88d29a470e2622a1669a7bfce96804b50d609dc3b0",
"Created": "2019-08-27T06:06:30.984427063Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"22d3e7500ccfdc7fcd192a9f5977ef32e086e340908b1c0ff007e4144cc91f2e": {
"Name": "time-series-api_dev_run_b35174fdf692",
"EndpointID": "23924b4f68570bc99e01768db53a083533092208a3c8c92b20152c7d2fefe8ce",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "default",
"com.docker.compose.project": "time-series-api",
"com.docker.compose.version": "1.24.1"
}
}
]
I believe you need to add the network option during the build. Try:
version: "2.3"
services:
  base:
    build:
      context: .
      network: host
  dev:
    network_mode: "host"
    extends:
      service: base
EDIT: Works on Linux, please see documentation for Mac
The host networking driver only works on Linux hosts, and is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.
I think you need to start the container with the up command, not run, since run overrides many options:
docker-compose up dev
or you may try run with --use-aliases:
--use-aliases Use the service's network aliases in the network(s) the
container connects to.
P.S. After your update, the following will work on Mac:
dev:
  network: host
  extends:
    service: base

How to make local changes reflect in docker container upon saving

I am trying to play with docker and I am still in the process of setting up my development environment. I am trying to set up my container in such a way that I can save changes on the host machine and have them propagated to the container.
I thought that by using a volume I was mounting my local ./code directory into the container. I was hoping that by doing this I could run docker-compose up, develop, save, and have those changes pushed into the container. But after saving I am not seeing my changes reflected in the app until I kill it and run docker-compose up again. Am I using the correct concepts, or is this even possible with docker?
Dockerfile
FROM node:10.13-alpine
ENV NODE_ENV production
# RUN mkdir /code
WORKDIR /code
docker-compose.yml
version: '2.1'
services:
  express-docker-test:
    build: code
    # command: npm install --production --silent && mv node_modules ../
    volumes:
      - E:\git\express-docker-test\code\:/code/
    environment:
      NODE_ENV: production
    expose:
      - "3000"
    ports:
      - "3000:3000"
    command: npm start
Here is a test repo I am experimenting with. I would like it if I could, with this repo, run docker-compose up, then make an edit to the response, say go from
- Hello world!!!
+ Hello World.
and then be able to refresh localhost:3000 and see the new response.
Results from docker inspect express-docker-test_express-docker-test
[
{
"Id": "sha256:791e8fbd5c871af53a37f5e9f5058e423f8ddf914b09f21ff1d80a40ea4f142f",
"RepoTags": [
"express-docker-test_express-docker-test:latest"
],
"RepoDigests": [],
"Parent": "sha256:e87553281ff90e49977fb6166c70c0e5ebf7bb98f0ae06468d7883dc0314c606",
"Comment": "",
"Created": "2019-07-14T19:15:47.1174538Z",
"Container": "8426c3db78c23c1cfc594c500ce77adf81ddd43ca52ca58456ba5b49ec60fee9",
"ContainerConfig": {
"Hostname": "8426c3db78c2",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"NODE_VERSION=10.13.0",
"YARN_VERSION=1.10.1",
"NODE_ENV=production"
],
"Cmd": [
"/bin/sh",
"-c",
"#(nop) WORKDIR /code"
],
"ArgsEscaped": true,
"Image": "sha256:e87553281ff90e49977fb6166c70c0e5ebf7bb98f0ae06468d7883dc0314c606",
"Volumes": null,
"WorkingDir": "/code",
"Entrypoint": null,
"OnBuild": [],
"Labels": {}
},
"DockerVersion": "18.09.2",
"Author": "",
"Config": {
"Hostname": "",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"NODE_VERSION=10.13.0",
"YARN_VERSION=1.10.1",
"NODE_ENV=production"
],
"Cmd": [
"node"
],
"ArgsEscaped": true,
"Image": "sha256:e87553281ff90e49977fb6166c70c0e5ebf7bb98f0ae06468d7883dc0314c606",
"Volumes": null,
"WorkingDir": "/code",
"Entrypoint": null,
"OnBuild": [],
"Labels": null
},
"Architecture": "amd64",
"Os": "linux",
"Size": 70255071,
"VirtualSize": 70255071,
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/8a0293f507ec80f68197b3bcb49b12e1a61c08f122a8a85060ab2d766d881a93/diff:/var/lib/docker/overlay2/a159e965ecf6e91533397def52fb1b3aef900c9793f933dc5120eed2381a37f4/diff:/var/lib/docker/overlay2/49f6493daf037bc88f9fb474ae71e36f13b58224082ff1f28ee367c795207c8d/diff",
"MergedDir": "/var/lib/docker/overlay2/9c4a3691e175f15585034acf7146ce5601f9e9d9c9cb60023bd348172116ae5c/merged",
"UpperDir": "/var/lib/docker/overlay2/9c4a3691e175f15585034acf7146ce5601f9e9d9c9cb60023bd348172116ae5c/diff",
"WorkDir": "/var/lib/docker/overlay2/9c4a3691e175f15585034acf7146ce5601f9e9d9c9cb60023bd348172116ae5c/work"
},
"Name": "overlay2"
},
"RootFS": {
"Type": "layers",
"Layers": [
"sha256:df64d3292fd6194b7865d7326af5255db6d81e9df29f48adde61a918fbd8c332",
"sha256:387bc77dd3f21547b74752dd03f6018b5d750684c832c39cd239704052ce366e",
"sha256:2faeaaebb1134fe62e2cc1603a761301e281c04a8e2e36ff2ac1005f7c06780f",
"sha256:dfb2bf93a77a1907921d8f1622e831fa31c924c6c612e593b8937f95e42a0afa"
]
},
"Metadata": {
"LastTagTime": "2019-07-14T19:31:00.4646905Z"
}
}
]
Don't RUN mkdir /code in the Dockerfile.
If you're mounting a local directory into /code, then your Dockerfile should just expect it to exist (having been mounted by docker run --volume=... or by Docker Compose, as you have).
Leave WORKDIR /code.
I thought mounts had to be absolute paths, but if ./code works, you're good.
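For illustration, a minimal sketch of that setup (the service and image names here are placeholders, not from the question):

```yaml
version: "3.2"
services:
  app:
    image: my-app-image        # placeholder image
    volumes:
      - ./code:/code           # bind-mount the local source directory into the container
    working_dir: /code         # runtime equivalent of WORKDIR /code
```

Compose resolves ./code relative to the directory containing docker-compose.yml, which is why the relative path works.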

How can a docker container on 2 subnets access the internet? (using docker-compose)

I have a container with 2 subnets:
one is the reverse proxy subnet
the second one is the internal subnet for the different containers of that project
The container needs to access an external SMTP server (on mailgun.com), but it doesn't look like, with docker-compose, you can put a container on one or more subnets and give it access to the host network at the same time.
Is there a way to allow this container to initiate connections to the outside world?
and, if no, what common workarounds are used? (for example, adding an extra IP to the container to be on the host network, etc.)
This is the docker compose file:
version: '2.3'
services:
keycloak:
container_name: keycloak
image: jboss/keycloak
restart: unless-stopped
volumes:
- '/appdata/keycloak:/opt/jboss/keycloak/standalone/data'
expose:
- 8080
external_links:
- auth
networks:
- default
- nginx
environment:
KEYCLOAK_USER: XXXX
KEYCLOAK_PASSWORD: XXXX
PROXY_ADDRESS_FORWARDING: 'true'
ES_JAVA_OPTS: '-Xms512m -Xmx512m'
VIRTUAL_HOST: auth.XXXX.com
VIRTUAL_PORT: 80
LETSENCRYPT_HOST: auth.XXXX.com
LETSENTRYPT_EMAIL: admin@XXXX.com
networks:
default:
external:
name: app-network
nginx:
external:
name: nginx-proxy
The networks are as follows:
$ dk network ls
NETWORK ID NAME DRIVER SCOPE
caba49ae8b1c bridge bridge local
2b311986a6f6 app-network bridge local
67f70f82aea2 host host local
9e0e2fe50385 nginx-proxy bridge local
dab9f171e37f none null local
and nginx-proxy network info is:
$ dk network inspect nginx-proxy
[
{
"Name": "nginx-proxy",
"Id": "9e0e2fe503857c5bc532032afb6646598ee0a08e834f4bd89b87b35db1739dae",
"Created": "2019-02-18T10:16:38.949628821Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"360b49ab066853a25cd739a4c1464a9ac25fe56132c596ce48a5f01465d07d12": {
"Name": "keycloak",
"EndpointID": "271ed86cac77db76f69f6e76686abddefa871b92bb60a007eb131de4e6a8cb53",
"MacAddress": "02:42:ac:12:00:04",
"IPv4Address": "172.18.0.4/16",
"IPv6Address": ""
},
"379dfe83d6739612c82e99f3e8ad9fcdfe5ebb8cdc5d780e37a3212a3bf6c11b": {
"Name": "nginx-proxy",
"EndpointID": "0fcf186c6785dd585b677ccc98fa68cc9bc66c4ae02d086155afd82c7c465fef",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
},
"4c944078bcb1cca2647be30c516b8fa70b45293203b355f5d5e00b800ad9a0d4": {
"Name": "adminmongo",
"EndpointID": "65f1a7a0f0bcef37ba02b98be8fa1f29a8d7868162482ac0b957f73764f73ccf",
"MacAddress": "02:42:ac:12:00:06",
"IPv4Address": "172.18.0.6/16",
"IPv6Address": ""
},
"671cc99775e09077edc72617836fa563932675800cb938397597e17d521c53fe": {
"Name": "portainer",
"EndpointID": "950e4b5dcd5ba2a13acba37f50e315483123d7da673c8feac9a0f8d6f8b9eb2b",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
},
"90a98111cbdebe76920ac2ebc50dafa5ea77eba9f42197216fcd57bad9e0516e": {
"Name": "kibana",
"EndpointID": "fe1768274eec9c02c28c74be0104326052b9b9a9c98d475015cd80fba82ec45d",
"MacAddress": "02:42:ac:12:00:05",
"IPv4Address": "172.18.0.5/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Update:
The following test was run to check the solution proposed by lbndev:
a test network was created:
# docker network create \
-o "com.docker.network.bridge.enable_icc"="true" \
-o "com.docker.network.bridge.enable_ip_masquerade"="true" \
-o "com.docker.network.bridge.host_binding_ipv4"="0.0.0.0" \
-o"com.docker.network.driver.mtu"="1500" \
test_network
e21057cf83eec70e9cfeed459d79521fb57e9f08477b729a8c8880ea83891ed9
we can display the contents:
# docker inspect test_network
[
{
"Name": "test_network",
"Id": "e21057cf83eec70e9cfeed459d79521fb57e9f08477b729a8c8880ea83891ed9",
"Created": "2019-02-24T21:52:44.678870135+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.22.0.0/16",
"Gateway": "172.22.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
Then we can inspect the container:
I put the contents on pastebin: https://pastebin.com/5bJ7A9Yp since it's quite large and would make this post unreadable.
and testing:
# docker exec -it 5d09230158dd sh
sh-4.2$ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
^C
--- 1.1.1.1 ping statistics ---
11 packets transmitted, 0 received, 100% packet loss, time 10006ms
So, we couldn't get this solution to work.
Looks like your bridge network is missing a few options, to allow it to reach the outside world.
Try executing docker network inspect bridge (the default bridge network). You'll see this in the options :
...
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
...
On your nginx-proxy network, these are missing.
You should delete your network and re-create it with these additional options. From the documentation on user-defined bridge networks and the docker network create command:
docker network create \
-o "com.docker.network.bridge.enable_icc"="true" \
-o "com.docker.network.bridge.enable_ip_masquerade"="true" \
-o "com.docker.network.bridge.host_binding_ipv4"="0.0.0.0" \
-o"com.docker.network.driver.mtu"="1500" \
nginx-proxy
Whether to enable ICC is up to you.
What will let you reach your mail server is having ip_masquerade enabled. Without it, your physical infrastructure (i.e. your network routers) would need to properly route the IPs of the docker network subnet (which I assume is not the case).
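On the docker host you can verify whether masquerading is actually in place: with ip_masquerade enabled, docker adds a MASQUERADE rule for the network's subnet to the nat table (the subnet shown is the one from the question; run as root):

```shell
# List the NAT rules docker added on the host
iptables -t nat -L POSTROUTING -n | grep 172.18
# A working setup shows a MASQUERADE rule for 172.18.0.0/16
```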
Alternatively, you could configure your docker network's subnet, ip range and gateway, to match those of your physical network.
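That alternative can be sketched with docker network create's addressing flags (the address values here are hypothetical, chosen to match an imaginary physical LAN):

```shell
docker network create \
  --subnet 192.168.10.0/24 \
  --ip-range 192.168.10.128/25 \
  --gateway 192.168.10.1 \
  nginx-proxy
```

Whether the resulting container addresses are reachable still depends on your routers knowing how to route that subnet.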
In the end, the problem turned out to be very simple:
In the daemon.json file, in the docker config, there was the following line:
{"iptables": false, "dns": ["1.1.1.1", "1.0.0.1"]}
It comes from the setup scripts we had been using, and we didn't know about iptables: false.
It prevents docker from updating the host’s iptables; while the bridge networks were set up correctly, there was no communication possible with the outside.
While simple in nature, it proved very long to find, so I’m posting it as an answer with the hope it might help someone.
Thanks to everyone involved for trying to solve this issue!
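For completeness, a daemon.json that keeps the custom DNS servers but lets docker manage iptables again (true is the default when the key is omitted) would be:

```json
{
  "dns": ["1.1.1.1", "1.0.0.1"]
}
```

After editing /etc/docker/daemon.json, the docker daemon has to be restarted for the change to take effect.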

Resources