I am using the following command to start the MLflow server:
mlflow server --backend-store-uri postgresql://mlflow_user:mlflow@localhost/mlflow --artifacts-destination <S3 bucket location> --serve-artifacts -h 0.0.0.0 -p 8000
Before production deployment, we have a requirement to print or fetch the configuration the server is running under. For example, the above command uses a localhost Postgres connection and an S3 bucket.
Is there a way to achieve this?
Also, how do I set the server's environment to "production"? In the end I should see a log like this:
[LOG] Started MLflow server:
Env: production
postgres: localhost:5432
S3: <S3 bucket path>
You can wrap it in a bash script or in a Makefile target, e.g.
start_mlflow_production_server:
	@echo "Started MLflow server:"
	@echo "Env: production"
	@echo "postgres: localhost:5432"
	@echo "S3: <S3 bucket path>"
	@mlflow server --backend-store-uri postgresql://mlflow_user:mlflow@localhost/mlflow --artifacts-destination <S3 bucket location> --serve-artifacts -h 0.0.0.0 -p 8000
Additionally, you can set environment variables specific to that server, print them, and use them in the command.
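For example, a minimal sketch of such a wrapper (MLFLOW_ENV, POSTGRES_URI and S3_BUCKET are just illustrative variable names I made up, not settings MLflow reads on its own):

#!/usr/bin/env bash
# Illustrative wrapper: the variable names are arbitrary, not built-in MLflow configuration.
export MLFLOW_ENV="production"
export POSTGRES_URI="postgresql://mlflow_user:mlflow@localhost:5432/mlflow"
export S3_BUCKET="<S3 bucket location>"

echo "[LOG] Started MLflow server:"
echo "Env: $MLFLOW_ENV"
echo "postgres: localhost:5432"
echo "S3: $S3_BUCKET"

exec mlflow server \
  --backend-store-uri "$POSTGRES_URI" \
  --artifacts-destination "$S3_BUCKET" \
  --serve-artifacts \
  -h 0.0.0.0 -p 8000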
I am trying to pass an environment variable into my devcontainer that is the output of a command run on my dev machine. I have tried the following in my devcontainer.json with no luck:
"initializeCommand": "export DOCKER_HOST_IP=\"$(ifconfig | grep -E '([0-9]{1,3}.){3}[0-9]{1,3}' | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d: | head -n1)\"",
"containerEnv": {
"DOCKER_HOST_IP1": "${localEnv:DOCKER_HOST_IP}",
"DOCKER_HOST_IP2": "${containerEnv:DOCKER_HOST_IP}"
},
and
"runArgs": [
"-e DOCKER_HOST_IP=\"$(ifconfig | grep -E '([0-9]{1,3}.){3}[0-9]{1,3}' | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d: | head -n1)\"
],
(the point of the ifconfig/grep pipeline is to give me the IP of my Docker host, which is running via Docker Desktop for Mac)
Some more context
Within my devcontainer I am running some kubectl deployments (to a cluster running on Docker for Desktop) where I would like to configure a hostAlias for a pod (docs) so that the pod directs requests for https://api.cancourier.local to the IP of the Docker host (which would then hit an ingress I have configured for that CNAME).
I could just pass the output of the ifconfig command to my kubectl command when running from within the devcontainer. The problem is that I get two different results depending on whether I run it on my host (10.0.0.89) or from within the devcontainer (10.1.0.1). 10.0.0.89 is the "correct" IP in this case: if I curl it from within my devcontainer, or from my deployed pod, I get the response I'd expect from my ingress.
I'm also aware that I could just use the name of my k8s service (in this case api) to communicate between pods, but this isn't ideal. As for why, I'm running a Next.js application in a pod. The Next.js app on this pod has two "contexts":
my browser - the app serves up static HTML/JS to my browser where communicating with https://api.cancourier.local works fine
on the pod itself - running some things (i.e. _middleware) on the pod itself, where the pod does not currently know how to resolve https://api.cancourier.local
What I was doing to temporarily get around this was to have a separate config on the pod: one for the "browser context" and another for things running on the pod itself. This is less than ideal, since once I deploy this Next.js app (to Vercel) it won't be an issue (my API will be deployed on some publicly accessible CNAME). If I can accomplish what I was trying to do above, I'd be able to avoid this.
So I didn't end up finding a way to pass the output of a command run on the host machine as an env var into my devcontainer. However, I did find a way to get the "correct" Docker host IP and pass it along to my pod.
In my devcontainer.json I have this:
"runArgs": [
// https://stackoverflow.com/a/43541732/3902555
"--add-host=api.cancourier.local:host-gateway",
"--add-host=cancourier.local:host-gateway"
],
which augments the devcontainer's /etc/hosts with:
192.168.65.2 api.cancourier.local
192.168.65.2 cancourier.local
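If you want to confirm the mapping from inside the devcontainer, standard tools are enough (getent may not be available in every base image):

# Run inside the devcontainer; both should print the host-gateway IP (192.168.65.2 in my case)
getent hosts api.cancourier.local
grep 'api.cancourier.local' /etc/hosts | awk '{print $1}'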
Then in my Makefile, where I store my kubectl commands, I simply run:
deploy-the-things: DOCKER_HOST_IP = $(shell cat /etc/hosts | grep 'api.cancourier.local' | awk '{print $$1}')
deploy-the-things:
	helm upgrade $(helm_release_name) $(charts_location) \
		--install \
		--namespace=$(local_namespace) \
		--create-namespace \
		-f $(charts_location)/values.yaml \
		-f $(charts_location)/local.yaml \
		--set cwd=$(HOST_PROJECT_PATH) \
		--set dockerHostIp=$(DOCKER_HOST_IP) \
		--debug \
		--wait
Then within my Helm chart I can use the following for the pod running my Next.js app:
hostAliases:
  - ip: {{ .Values.dockerHostIp }}
    hostnames:
      - "api.cancourier.local"
I highly recommend following this tutorial: Container environment variables
In this tutorial, two methods are mentioned:
Adding individual variables
Using env file
Choose whichever is more comfortable for you. Good luck!
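If it helps, these are the plain docker run equivalents of those two approaches (the image name and the env file path are just examples):

# 1. Adding individual variables
docker run -e DOCKER_HOST_IP=10.0.0.89 my-image

# 2. Using an env file
printf 'DOCKER_HOST_IP=10.0.0.89\n' > .devcontainer/devcontainer.env
docker run --env-file .devcontainer/devcontainer.env my-image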
I don't suppose anyone knows if it's possible to call the docker run or docker compose up commands from a web app?
I have the following scenario: a React app that uses OpenLayers for its maps. When the user loses internet connection, it falls back to making requests to a map server running locally in Docker. The issue is that the user needs to start the server manually via the command line. To make things easier for the user, I added the following bash script and Docker Compose file to boot up the server with a single command, but I was wondering if I could incorporate that functionality into the web app and have the user boot the map server with the click of a button?
Just for reference's sake, these are my bash and compose files.
#!/bin/sh
dockerDown=$(docker info | grep -qi "ERROR" && echo "stopped")
if [ -n "$dockerDown" ]
then
    echo "\n ********* Please start docker before running this script ********* \n"
    exit 1
fi

skipInstall="no"
read -p "Have you imported the maps already and just want to run the app (y/n)?" choice
case "$choice" in
    y|Y ) skipInstall="yes";;
    n|N ) skipInstall="no";;
    * ) skipInstall="no";;
esac

pbfUrl='https://download.geofabrik.de/asia/malaysia-singapore-brunei-latest.osm.pbf'
#polyUrl='https://download.geofabrik.de/asia/malaysia-singapore-brunei.poly'
#-e DOWNLOAD_POLY=$polyUrl \

docker volume create openstreetmap-data
docker volume create openstreetmap-rendered-tiles

if [ "$skipInstall" = "no" ]
then
    echo "\n ***** IF THIS IS THE FIRST TIME, YOU MIGHT WANT TO GO GET A CUP OF COFFEE WHILE YOU WAIT ***** \n"
    docker run \
        -e DOWNLOAD_PBF=$pbfUrl \
        -v openstreetmap-data:/var/lib/postgresql/12/main \
        -v openstreetmap-rendered-tiles:/var/lib/mod_tile \
        overv/openstreetmap-tile-server \
        import
    echo "Finished Postgres container!"
fi

echo "\n *** BOOTING UP SERVER CONTAINER *** \n"
docker compose up
My Docker Compose file:
version: '3'
services:
  map:
    image: overv/openstreetmap-tile-server
    volumes:
      - openstreetmap-data:/var/lib/postgresql/12/main
      - openstreetmap-rendered-tiles:/var/lib/mod_tile
    environment:
      - THREADS=24
      - OSM2PGSQL_EXTRA_ARGS=-C 4096
      - AUTOVACUUM=off
    ports:
      - "8080:80"
    command: "run"
volumes:
  openstreetmap-data:
    external: true
  openstreetmap-rendered-tiles:
    external: true
There is the Docker Engine API, which you can use to start containers over HTTP or the local Docker socket. See the Docker documentation:
https://docs.docker.com/engine/api/
To start a container using the API, use the ContainerStart endpoint:
https://docs.docker.com/engine/api/v1.41/#operation/ContainerStart
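For example, a minimal sketch using curl against the local Docker socket. It assumes a container named map-server already exists (a placeholder name; e.g. created beforehand with docker create or docker compose create), and it would have to run from a backend process that can reach /var/run/docker.sock, not from the browser itself - exposing the Docker socket to a web app is a real security consideration.

# Start an existing container via the Engine API (placeholder name "map-server")
curl --unix-socket /var/run/docker.sock \
  -X POST "http://localhost/v1.41/containers/map-server/start"

# Inspect it to confirm it is running
curl --unix-socket /var/run/docker.sock \
  "http://localhost/v1.41/containers/map-server/json" | grep -o '"Status":"[^"]*"'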
I created docker container with this command:
docker run --name artifactory -d -p 8081:8081 \
-v /jfrog/artifactory:/var/opt/jfrog/artifactory \
-e EXTRA_JAVA_OPTIONS='-Xms128M -Xmx512M -Xss256k -XX:+UseG1GC' \
docker.bintray.io/jfrog/artifactory-oss:latest
and started Artifactory, but the response I get is 404 - Not Found.
If you access http://99.79.191.172:8081/artifactory you can see it.
If you follow the Artifactory Docker install documentation, you'll see you also need to expose port 8082 for the new JFrog Router, which is now handling the traffic coming in to the UI (and other services as needed).
This new architecture was introduced in Artifactory 7.x. By using the latest tag, you don't have full control over which version you are running...
So your command should look like:
docker run --name artifactory -p 8081:8081 -d -p 8082:8082 \
-v /jfrog/artifactory:/var/opt/jfrog/artifactory \
docker.bintray.io/jfrog/artifactory-oss:latest
For controlling the configuration (like the Java options you want), it's recommended to use the Artifactory system.yaml file. This file is the best way to control all aspects of the Artifactory system configuration.
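For example, a minimal sketch (an illustration, not official JFrog docs): with the same host mount as in your docker run command (-v /jfrog/artifactory:/var/opt/jfrog/artifactory), you could drop the JVM options into system.yaml before starting the container. Skip this if you already have a system.yaml there, since it would overwrite it.

# Illustrative only; assumes the host side of the volume is /jfrog/artifactory as in the question
sudo mkdir -p /jfrog/artifactory/etc
sudo tee /jfrog/artifactory/etc/system.yaml > /dev/null <<'EOF'
configVersion: 1
shared:
    extraJavaOpts: "-Xms128M -Xmx512M -Xss256k -XX:+UseG1GC"
EOF
# Then (re)start the container so Artifactory picks up the file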
I start my instance with
sudo groupadd -g 1030 artifactory
sudo useradd -u 1030 -g artifactory artifactory
sudo chown artifactory:artifactory /daten/jfrog -R
docker run \
-d \
--name artifactory \
-v /daten/jfrog/artifactory:/var/opt/jfrog/artifactory \
--user "$(id -u artifactory):$(id -g artifactory)" \
--restart always \
-p 8084:8081 -p 9082:8082 releases-docker.jfrog.io/jfrog/artifactory-oss:latest
This is my /daten/jfrog/artifactory/etc/system.yaml (I changed nothing manually)
## @formatter:off
## JFROG ARTIFACTORY SYSTEM CONFIGURATION FILE
## HOW TO USE: comment-out any field and keep the correct yaml indentation by deleting only the leading '#' character.
configVersion: 1
## NOTE: JFROG_HOME is a place holder for the JFrog root directory containing the deployed product, the home directory for all JFrog products.
## Replace JFROG_HOME with the real path! For example, in RPM install, JFROG_HOME=/opt/jfrog
## NOTE: Sensitive information such as passwords and join key are encrypted on first read.
## NOTE: The provided commented key and value is the default.
## SHARED CONFIGURATIONS
## A shared section for keys across all services in this config
shared:
    ## Java 11 distribution to use
    #javaHome: "JFROG_HOME/artifactory/app/third-party/java"
    ## Extra Java options to pass to the JVM. These values add to or override the defaults.
    #extraJavaOpts: "-Xms512m -Xmx2g"
    ## Security Configuration
    security:
        ## Join key value for joining the cluster (takes precedence over 'joinKeyFile')
        #joinKey: "<Your joinKey>"
        ## Join key file location
        #joinKeyFile: "<For example: JFROG_HOME/artifactory/var/etc/security/join.key>"
        ## Master key file location
        ## Generated by the product on first startup if not provided
        #masterKeyFile: "<For example: JFROG_HOME/artifactory/var/etc/security/master.key>"
        ## Maximum time to wait for key files (master.key and join.key)
        #bootstrapKeysReadTimeoutSecs: 120
    ## Node Settings
    node:
        ## A unique id to identify this node.
        ## Default auto generated at startup.
        #id: "art1"
        ## Default auto resolved by startup script
        #ip:
        ## Sets this node as primary in HA installation
        #primary: true
        ## Sets this node as part of HA installation
        #haEnabled: true
    ## Database Configuration
    database:
        ## One of mysql, oracle, mssql, postgresql, mariadb
        ## Default Embedded derby
        ## Example for postgresql
        #type: postgresql
        #driver: org.postgresql.Driver
        #url: "jdbc:postgresql://<your db url, for example: localhost:5432>/artifactory"
        #username: artifactory
        #password: password
I see this in router-request.log
{"BackendAddr":"localhost:8040","ClientAddr":"127.0.0.1:43740","DownstreamContentSize":95,"DownstreamStatus":404,"Duration":3608608,"RequestMethod":"GET","RequestPath":"/access/api/v1/users/jffe#000?expand=groups","StartUTC":"2021-12-30T11:49:19.56803042Z","level":"info","msg":"","request_Uber-Trace-Id":"664d23ea1941d9b0:410817c2c69f2849:31b50a1adccb9846:0","request_User-Agent":"JFrog Access Java Client/7.29.9 72909900 Artifactory/7.29.8 72908900","time":"2021-12-30T11:49:19Z"}
{"BackendAddr":"localhost:8040","ClientAddr":"127.0.0.1:43734","DownstreamContentSize":95,"DownstreamStatus":404,"Duration":4000683,"RequestMethod":"GET","RequestPath":"/access/api/v1/users/jffe#000?expand=groups","StartUTC":"2021-12-30T11:49:19.567751867Z","level":"info","msg":"","request_Uber-Trace-Id":"23967a8743252dd8:436e2a5407b66e64:31cfc496ccc260fa:0","request_User-Agent":"JFrog Access Java Client/7.29.9 72909900 Artifactory/7.29.8 72908900","time":"2021-12-30T11:49:19Z"}
{"BackendAddr":"localhost:8040","ClientAddr":"127.0.0.1:43736","DownstreamContentSize":95,"DownstreamStatus":404,"Duration":4021195,"RequestMethod":"GET","RequestPath":"/access/api/v1/users/jffe#000?expand=groups","StartUTC":"2021-12-30T11:49:19.567751873Z","level":"info","msg":"","request_Uber-Trace-Id":"28300761ec7b6cd5:36588fa084ee7105:10fbdaadbc39b21e:0","request_User-Agent":"JFrog Access Java Client/7.29.9 72909900 Artifactory/7.29.8 72908900","time":"2021-12-30T11:49:19Z"}
{"BackendAddr":"localhost:8040","ClientAddr":"127.0.0.1:43622","DownstreamContentSize":95,"DownstreamStatus":404,"Duration":3918873,"RequestMethod":"GET","RequestPath":"/access/api/v1/users/jffe#000?expand=groups","StartUTC":"2021-12-30T11:49:19.567751891Z","level":"info","msg":"","request_Uber-Trace-Id":"6d57920d087f4d0f:26b9120411520de2:49b0e61895e17734:0","request_User-Agent":"JFrog Access Java Client/7.29.9 72909900 Artifactory/7.29.8 72908900","time":"2021-12-30T11:49:19Z"}
{"BackendAddr":"localhost:8040","ClientAddr":"127.0.0.1:43742","DownstreamContentSize":95,"DownstreamStatus":404,"Duration":2552815,"RequestMethod":"GET","RequestPath":"/access/api/v1/users/jffe#000?expand=groups","StartUTC":"2021-12-30T11:49:19.569112324Z","level":"info","msg":"","request_Uber-Trace-Id":"d4a7bb216cf31eb:5c783ae80b95778f:fd11882b03eb63f:0","request_User-Agent":"JFrog Access Java Client/7.29.9 72909900 Artifactory/7.29.8 72908900","time":"2021-12-30T11:49:19Z"}
{"BackendAddr":"localhost:8081","ClientAddr":"127.0.0.1:43730","DownstreamContentSize":45,"DownstreamStatus":200,"Duration":18106757,"RequestMethod":"POST","RequestPath":"/artifactory/api/auth/loginRelatedData","StartUTC":"2021-12-30T11:49:19.557661286Z","level":"info","msg":"","request_Uber-Trace-Id":"d4a7bb216cf31eb:640bf3bca741e43b:28f0abcfc40f203:0","request_User-Agent":"JFrog-Frontend/1.29.6","time":"2021-12-30T11:49:19Z"}
{"BackendAddr":"localhost:8081","ClientAddr":"127.0.0.1:43726","DownstreamContentSize":169,"DownstreamStatus":200,"Duration":19111069,"RequestMethod":"GET","RequestPath":"/artifactory/api/crowd","StartUTC":"2021-12-30T11:49:19.557426794Z","level":"info","msg":"","request_Uber-Trace-Id":"664d23ea1941d9b0:417647e0e0fd0911:55e80b7f7ab0724e:0","request_User-Agent":"JFrog-Frontend/1.29.6","time":"2021-12-30T11:49:19Z"}
{"BackendAddr":"localhost:8081","ClientAddr":"127.0.0.1:43724","DownstreamContentSize":496,"DownstreamStatus":200,"Duration":19308753,"RequestMethod":"GET","RequestPath":"/artifactory/api/securityconfig","StartUTC":"2021-12-30T11:49:19.557346739Z","level":"info","msg":"","request_Uber-Trace-Id":"6d57920d087f4d0f:7bdba564c07f8bc5:71b1b99e1e406d5f:0","request_User-Agent":"JFrog-Frontend/1.29.6","time":"2021-12-30T11:49:19Z"}
{"BackendAddr":"localhost:8081","ClientAddr":"127.0.0.1:43728","DownstreamContentSize":2,"DownstreamStatus":200,"Duration":19140699,"RequestMethod":"GET","RequestPath":"/artifactory/api/saml/config","StartUTC":"2021-12-30T11:49:19.557516365Z","level":"info","msg":"","request_Uber-Trace-Id":"23967a8743252dd8:2f9035e56dd9f0c5:4315ec00a6b32eb4:0","request_User-Agent":"JFrog-Frontend/1.29.6","time":"2021-12-30T11:49:19Z"}
{"BackendAddr":"localhost:8081","ClientAddr":"127.0.0.1:43732","DownstreamContentSize":148,"DownstreamStatus":200,"Duration":18907203,"RequestMethod":"GET","RequestPath":"/artifactory/api/httpsso","StartUTC":"2021-12-30T11:49:19.557786692Z","level":"info","msg":"","request_Uber-Trace-Id":"28300761ec7b6cd5:2767cf480f6ebd73:2c013715cb58b384:0","request_User-Agent":"JFrog-Frontend/1.29.6","time":"2021-12-30T11:49:19Z"}
I had to change the port to 8084 (8081 is already occupied on my host), but I run into the 404 as well.
Does anyone know how to solve this?
I'm having a lot of trouble running Vault with docker-compose.
My requirements are:
running as a daemon (so it restarts when I restart my Mac)
secret being persisted between container restart
no human intervention between restart (unsealing, etc.)
using a generic token
My current docker-compose.yml:
version: '2.3'
services:
  vault-dev:
    image: vault:1.2.1
    restart: always
    container_name: vault-dev
    environment:
      VAULT_DEV_ROOT_TOKEN_ID: "myroot"
      VAULT_LOCAL_CONFIG: '{"backend": {"file": {"path": "/vault/file"}}, "default_lease_ttl": "168h", "max_lease_ttl": "720h"}'
    ports:
      - "8200:8200"
    volumes:
      - ./storagedc/vault/file:/vault/file
However, when the container restarts, I get this log:
==> Vault server configuration:
Api Address: http://0.0.0.0:8200
Cgo: disabled
Cluster Address: https://0.0.0.0:8201
Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
Log Level: info
Mlock: supported: true, enabled: false
Storage: file
Version: Vault v1.2.1
Error initializing Dev mode: Vault is already initialized
Is there any recommendation on that matter?
I'm going to pseudo-code an answer to work around the problems specified, but please note that this is a massive hack and should NEVER be deployed in production, as a hard-coded master key and a single unseal key are COLOSSALLY INSECURE.
So, you want a test vault server, with persistence.
You can accomplish this, but it will take a little work because of the default behavior of the vault container: if you just start it, it runs in dev mode, which doesn't allow for persistence. Just adding persistence via the environment variable won't solve the problem entirely, because it conflicts with the container's default start mode.
So we need to replace the container's entrypoint script with something that does what we want instead.
First we copy the script out of the container:
$ docker create --name vault vault:1.2.1
$ docker cp vault:/usr/local/bin/docker-entrypoint.sh .
$ docker rm vault
For simplicity, we're going to edit the file and mount it into the container using the docker-compose file. I'm not going to make it really functional - just enough to get it to do what's desired. The entire point here is a sample, not something that is usable in production.
My customizations all start at about line 98 - first we launch a dev-mode server in order to record the unseal key, then we terminate the dev mode server.
# Here's my customization:
if [ ! -f /vault/unseal/sealfile ]; then
    # start in dev mode, in the background to record the unseal key
    su-exec vault vault server \
        -dev -config=/vault/config \
        -dev-root-token-id="$VAULT_DEV_ROOT_TOKEN_ID" \
        2>&1 | tee /vault/unseal/sealfile &
    while ! grep -q 'core: vault is unsealed' /vault/unseal/sealfile; do
        sleep 1
    done
    kill %1
fi
Next we check for supplemental config. This is where the extra config goes for disabling TLS, and for binding the appropriate interface.
if [ -n "$VAULT_SUPPLEMENTAL_CONFIG" ]; then
echo "$VAULT_SUPPLEMENTAL_CONFIG" > "$VAULT_CONFIG_DIR/supplemental.json"
fi
Then we launch vault in 'release' mode:
if [ "$(id -u)" = '0' ]; then
set -- su-exec vault "$#"
"$#"&
Then we get the unseal key from the sealfile:
    unseal=$(sed -n 's/Unseal Key: //p' /vault/unseal/sealfile)
    if [ -n "$unseal" ]; then
        while ! vault operator unseal "$unseal"; do
            sleep 1
        done
    fi
We just wait for the process to terminate:
    wait
    exit $?
fi
There's a full gist for this on github.
Now the docker-compose.yml for doing this is slightly different to your own:
version: '2.3'
services:
  vault-dev:
    image: vault:1.2.1
    restart: always
    container_name: vault-dev
    command: [ 'vault', 'server', '-config=/vault/config' ]
    environment:
      VAULT_DEV_ROOT_TOKEN_ID: "myroot"
      VAULT_LOCAL_CONFIG: '{"backend": {"file": {"path": "/vault/file"}}, "default_lease_ttl": "168h", "max_lease_ttl": "720h"}'
      VAULT_SUPPLEMENTAL_CONFIG: '{"ui":true, "listener": {"tcp":{"address": "0.0.0.0:8200", "tls_disable": 1}}}'
      VAULT_ADDR: "http://127.0.0.1:8200"
    ports:
      - "8200:8200"
    volumes:
      - ./vault:/vault/file
      - ./unseal:/vault/unseal
      - ./docker-entrypoint.sh:/usr/local/bin/docker-entrypoint.sh
    cap_add:
      - IPC_LOCK
The command is the command to execute. This is what ends up in the "$@"& part of the script changes.
I've added VAULT_SUPPLEMENTAL_CONFIG for the non-dev run. It needs to specify the listener address, and it needs to turn off TLS. I also added the UI, so I can access it at http://127.0.0.1:8200/ui. This is part of the changes I made to the script.
Because this is all local, for test purposes, I'm mounting ./vault as the data directory, ./unseal as the place to record the unseal key, and ./docker-entrypoint.sh as the entrypoint script.
I can docker-compose up this and it launches a persistent Vault. There are some errors in the log as the script tries to unseal before the server has launched, but it works, and it persists across multiple docker-compose runs.
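As a quick sanity check once it's up, the unauthenticated seal-status endpoint tells you whether the unseal in the modified entrypoint worked (this assumes the 8200:8200 port mapping from the compose file above):

# "initialized": true and "sealed": false mean the persisted data was picked up and unsealed
curl -s http://127.0.0.1:8200/v1/sys/seal-status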
Again, this is completely unsuitable for any form of long-term use. You're better off using Docker's own secrets mechanism if you're doing things like this.
I'd like to suggest a simpler solution for local development with docker-compose.
Vault is always unsealed
Vault UI is enabled and accessible at http://localhost:8200/ui/vault on your dev machine
Vault has predefined root token which can be used by services to communicate with it
docker-compose.yml
vault:
  hostname: vault
  container_name: vault
  image: vault:1.12.0
  environment:
    VAULT_ADDR: "http://0.0.0.0:8200"
    VAULT_API_ADDR: "http://0.0.0.0:8200"
  ports:
    - "8200:8200"
  volumes:
    - ./volumes/vault/file:/vault/file:rw
  cap_add:
    - IPC_LOCK
  entrypoint: vault server -dev -dev-listen-address="0.0.0.0:8200" -dev-root-token-id="root"
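To illustrate how a service (or you, from the host) would talk to it, here is a small sketch using the standard Vault CLI and the root token from the entrypoint above; if you don't have the vault binary locally, the same commands can be run via docker exec into the vault container:

export VAULT_ADDR=http://localhost:8200
export VAULT_TOKEN=root                    # matches -dev-root-token-id above

vault status                               # dev mode starts unsealed, so Sealed should be false
vault kv put secret/demo api_key=12345     # dev mode mounts a KV v2 engine at secret/ by default
vault kv get secret/demo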