Can't log MLflow artifacts to S3 with docker-based tracking server

I'm trying to set up a simple MLflow tracking server with docker that uses a MySQL backend store and an S3 bucket for artifact storage. I'm using a simple docker-compose file to set this up on a server and supplying all of the credentials through a .env file. When I try to run the sklearn_elasticnet_wine example from the mlflow repo here: https://github.com/mlflow/mlflow/tree/master/examples/sklearn_elasticnet_wine using TRACKING_URI = "http://localhost:5005" from the machine hosting my tracking server, the run fails with the following error: botocore.exceptions.NoCredentialsError: Unable to locate credentials.

I've verified that my environment variables are correct and available in my mlflow_server container. The runs show up in my backend store, so the run only seems to be failing at the artifact logging step. I'm not sure why this isn't working. I've seen a few examples online of how to set up a tracking server, including: https://towardsdatascience.com/deploy-mlflow-with-docker-compose-8059f16b6039. Some also use MinIO, but others just specify their S3 location as I have. I'm not sure what I'm doing wrong at this point. Do I need to explicitly set the ARTIFACT_URI as well? Should I be using MinIO? Eventually I'll be logging runs to the server from another machine, hence the nginx container. I'm pretty new to all of this, so I'm hoping it's something really obvious and easy to fix, but so far Google has failed me. TIA.
version: '3'
services:
  app:
    restart: always
    build: ./mlflow
    image: mlflow_server
    container_name: mlflow_server
    expose:
      - 5001
    ports:
      - "5001:5001"
    networks:
      - internal
    environment:
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
      - AWS_S3_BUCKET=${AWS_S3_BUCKET}
      - DB_USER=${DB_USER}
      - DB_PASSWORD=${DB_PASSWORD}
      - DB_PORT=${DB_PORT}
      - DB_NAME=${DB_NAME}
    command: >
      mlflow server
      --backend-store-uri mysql+pymysql://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}
      --default-artifact-root s3://${AWS_S3_BUCKET}/mlruns/
      --host 0.0.0.0
      --port 5001
  nginx:
    restart: always
    build: ./nginx
    image: mlflow_nginx
    container_name: mlflow_nginx
    ports:
      - "5005:80"
    networks:
      - internal
    depends_on:
      - app
networks:
  internal:
    driver: bridge

Finally figured this out. I didn't realize that the client also needs access to the AWS credentials for S3 storage: MLflow clients upload artifacts to S3 directly rather than routing them through the tracking server, so the credentials must be available on the client machine too, not just in the server container.
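A minimal client-side sketch of that fix (all values here are placeholders, not real credentials): the machine that submits the run exports both the tracking URI and the same AWS settings the server uses.

```shell
# Client-side environment (hypothetical placeholder values): the machine that
# runs `mlflow run` talks to S3 directly for artifact uploads, so it needs
# AWS credentials as well as the tracking URI.
export MLFLOW_TRACKING_URI="http://localhost:5005"
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEY"
export AWS_SECRET_ACCESS_KEY="example-secret"
export AWS_DEFAULT_REGION="us-east-1"
echo "tracking server: ${MLFLOW_TRACKING_URI}"
```

With these exported, the sklearn_elasticnet_wine example should log params and metrics to the backend store and artifacts to S3.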

Related

How to simulate AWS S3 in docker-compose using MinIO?

I have an application server that must fetch data from AWS S3, e.g. https://my-bucket.s3.us-east-1.amazonaws.com/assets/images/557a84a8-bd4b-7a8e-81c9-d445228187c0.png
I want to test this application server using docker-compose.
I can spin up a MinIO server quite easily, but how do I configure things so that my application accesses the local MinIO server as if it were the AWS one?
I am using the standard .NET AWS SDK and I do not want to change my application code for testing (this would defeat the point of the tests).
What I have so far:
version: '3.9'
services:
  s3:
    image: quay.io/minio/minio:RELEASE.2022-08-13T21-54-44Z
    command: minio server /data
    ports:
      - "9000:9000"
    environment:
      MINIO_ROOT_USER: minio
      MINIO_ROOT_PASSWORD: minio123
      MINIO_ACCESS_KEY: minio_access_key
      MINIO_SECRET_KEY: minio_secret_key
    restart: always
  server:
    image: server:latest
    ports:
      - "8080:8080"
    environment:
      AWS_ACCESS_KEY_ID: minio_access_key
      AWS_SECRET_ACCESS_KEY: minio_secret_key
    depends_on:
      s3:
        condition: service_started
You can set a network alias on your s3 container (see https://docs.docker.com/compose/compose-file/compose-file-v3/#networks) to make it available as my-bucket.s3.us-east-1.amazonaws.com.
You can tell the MinIO server to recognize name-based buckets rooted at s3.us-east-1.amazonaws.com by setting the MINIO_DOMAIN environment variable (see the Server Configuration Guide).
You can change the port on which MinIO listens by setting the --address command-line option (or by putting a proxy in front of it).
That gets you:
services:
  s3:
    image: quay.io/minio/minio:RELEASE.2022-08-13T21-54-44Z
    command: minio server /data
    ports:
      - "9000:9000"
    environment:
      MINIO_ROOT_USER: minio
      MINIO_ROOT_PASSWORD: minio123
      MINIO_ACCESS_KEY: minio_access_key
      MINIO_SECRET_KEY: minio_secret_key
      MINIO_DOMAIN: s3.us-east-1.amazonaws.com
    restart: always
    networks:
      default:
        aliases:
          - my-bucket.s3.us-east-1.amazonaws.com
This will almost work: your bucket would be available at
http://my-bucket.s3.us-east-1.amazonaws.com:9000. If you want to
make it available at https://my-bucket.s3.us-east-1.amazonaws.com,
you would need to set up an SSL terminating proxy in front of it
(something like Traefik, Nginx, etc), and you would need to create and
install the necessary certificates so that your client trusts the
server.
Hopefully this is enough to point you in the right direction!
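To sanity-check the setup from the host without touching DNS, you can construct the virtual-host-style URL the SDK will request and force name resolution with curl's --resolve flag (the bucket name is an assumption matching the question):

```shell
# Build the virtual-host-style endpoint the SDK will request for this bucket.
BUCKET="my-bucket"
ENDPOINT="http://${BUCKET}.s3.us-east-1.amazonaws.com:9000"
echo "$ENDPOINT"
# From the host, bypass DNS and point the name at the published MinIO port:
# curl --resolve "${BUCKET}.s3.us-east-1.amazonaws.com:9000:127.0.0.1" \
#   "${ENDPOINT}/minio/health/live"
```

Inside the compose network, the alias makes the same hostname resolve without any --resolve trickery.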

How to bind folders inside docker containers?

I have docker-compose.yml on my local machine like below:
version: "3.3"
services:
  api:
    build: ./api
    volumes:
      - ./api:/api
    ports:
      - 3000:3000
    links:
      - mysql
    depends_on:
      - mysql
  app:
    build: ./app
    volumes:
      - ./app:/app
    ports:
      - 80:80
  mysql:
    image: mysql:8.0.27
    volumes:
      - ./mysql:/var/lib/mysql
    tty: true
    restart: always
    environment:
      MYSQL_DATABASE: db
      MYSQL_ROOT_PASSWORD: qwerty
      MYSQL_USER: db
      MYSQL_PASSWORD: qwerty
    ports:
      - '3306:3306'
The api is a NestJS app; app and mysql are Angular and MySQL respectively. And I need to work with this setup locally.
How can I make it so that any of my changes are applied without rebuilding the containers every time?
You don't have to build an image with your sources in it for a development environment. For NestJS, and since you're using Docker (I point this out deliberately, because other container runtimes exist), you can simply run a Node.js image from the Docker main registry: https://hub.docker.com/_/node.
You could run it with:
docker run -d -v "$(pwd)/app:/app" node:12-alpine /app/index.js
N.B.: I chose 12-alpine for the example, and I imagine the file that starts your app is index.js; replace it with yours. Note that docker run needs an absolute host path for the bind mount, hence $(pwd).
Keep in mind that you must install the Node dependencies yourself, and they must be present in the ./app directory.
For docker-compose, it could look like this:
version: "3.3"
services:
  app:
    image: node:12-alpine
    command: /app/index.js
    volumes:
      - ./app:/app
    ports:
      - "80:80"
Same way for your API project.
For a production image, it is still suggested to build the image with the sources in it.
Say you're working on your front-end application (app). This needs to make calls out to the other components, especially api. So you can start the things it depends on, but not the application itself:
docker-compose up -d api
Update your application configuration for this different environment: if you would have proxied to http://api:3000 before, for example, you need to change this to http://localhost:3000 to connect to the container's published ports:.
Now you can develop your application totally normally, without doing anything Docker-specific.
# outside Docker, on your normal development workstation
yarn run dev
$EDITOR src/components/Foo.tsx
You might find it convenient to use environment variables for these settings that will, well, differ per environment. If you're developing the back-end code but want to attach a live UI to it, you'll either need to rebuild the container or update the front-end's back-end URL to point at the host system.
This approach also means you do not need to bind-mount your application's code into the container, and I'd recommend removing those volumes: blocks.
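The per-environment URL switch described above can be as small as one variable with a sensible default; a sketch (the variable name API_URL is my own, not from the question):

```shell
# Default to the published container port for development on the host;
# override API_URL with the compose-internal hostname (http://api:3000)
# when the front-end runs inside the compose network instead.
API_URL="${API_URL:-http://localhost:3000}"
echo "API requests will go to ${API_URL}"
```

The same pattern works for any setting that differs between "inside the compose network" and "on the workstation".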

bitnami parse server with docker-compose gives blank screen after dashboard login

I'm trying to run the bitnami parse-server docker images with the docker-compose configuration created by bitnami (link) locally (for testing).
I run the code provided on their page with Ubuntu 20.04:
$ curl -sSL https://raw.githubusercontent.com/bitnami/bitnami-docker-parse/master/docker-compose.yml > docker-compose.yml
$ docker-compose up -d
The dashboard runs fine in the browser at http://localhost/login, but after entering the user and pass the browser starts loading, then ends up with a blank white screen.
[screenshots of the browser console errors]
here's the docker-compose code
version: '2'
services:
  mongodb:
    image: docker.io/bitnami/mongodb:4.2
    volumes:
      - 'mongodb_data:/bitnami/mongodb'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - MONGODB_USERNAME=bn_parse
      - MONGODB_DATABASE=bitnami_parse
      - MONGODB_PASSWORD=bitnami123
  parse:
    image: docker.io/bitnami/parse:4
    ports:
      - '1337:1337'
    volumes:
      - 'parse_data:/bitnami/parse'
    depends_on:
      - mongodb
    environment:
      - PARSE_DATABASE_HOST=mongodb
      - PARSE_DATABASE_PORT_NUMBER=27017
      - PARSE_DATABASE_USER=bn_parse
      - PARSE_DATABASE_NAME=bitnami_parse
      - PARSE_DATABASE_PASSWORD=bitnami123
  parse-dashboard:
    image: docker.io/bitnami/parse-dashboard:3
    ports:
      - '80:4040'
    volumes:
      - 'parse_dashboard_data:/bitnami'
    depends_on:
      - parse
volumes:
  mongodb_data:
    driver: local
  parse_data:
    driver: local
  parse_dashboard_data:
    driver: local
What am I missing here?
The parse-dashboard knows the parse backend through its docker-compose hostname parse.
So after login, the parse-dashboard (UI) will generate requests to that host (http://parse:1337/parse/serverInfo), based on the default parse backend hostname. More details about this here.
The problem is that your browser (host computer) doesn't know how to resolve the ip for the hostname parse. Hence the name resolution errors.
As a workaround, you can add an entry to your hosts file to have the parse hostname resolved to 127.0.0.1.
This post describes it well: Linked docker-compose containers making http requests
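A sketch of that workaround (editing the hosts file requires admin rights; the echo-only form lets you inspect the line before appending it):

```shell
# Map the compose service name to loopback so the browser on the host can
# resolve http://parse:1337 the same way the dashboard container does.
HOSTS_LINE="127.0.0.1 parse"
echo "$HOSTS_LINE"
# Append it for real with elevated privileges, e.g.:
# sudo sh -c 'echo "127.0.0.1 parse" >> /etc/hosts'
```

After that, requests from the host to http://parse:1337 reach the published port of the parse container.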

Google Cloud Run health check fails with Docker-Compose

I am trying to upload my backend to Google Cloud Run. I'm using Docker-Compose with 2 components: a Golang Server and a Postgres DB.
When I run Docker-Compose locally, everything works great! When I upload to Gcloud with
gcloud builds submit . --tag gcr.io/BACKEND_NAME
gcloud run deploy --image gcr.io/BACKEND_NAME --platform managed
Gcloud's health check fails, getting stuck on "Deploying... Revision deployment finished. Waiting for health check to begin." and then throws "Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information."
I understand that Google Cloud Run provides a PORT env variable, which I tried to account for in my docker-compose.yml. But the command still fails. I'm out of ideas, what could be wrong here?
Here is my docker-compose.yml
version: '3'
services:
  db:
    image: postgres:latest # use latest official postgres version
    container_name: db
    restart: "always"
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=db
    volumes:
      - database-data:/var/lib/postgresql/data/ # persist data even if container shuts down
  api:
    container_name: api
    depends_on:
      - db
    restart: on-failure
    build: .
    ports:
      # Bind the GCR-provided incoming PORT to port 8000 of our api
      - "${PORT}:8000"
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=db
volumes:
  database-data: # named volumes can be managed more easily using docker-compose
and the api container is a Golang binary, which waits for a connection to be made with the Postgres DB before calling http.ListenAndServe(":8000", handler).
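One detail worth noting about the contract being tested here: Cloud Run deploys a single container image and does not read docker-compose files, so a host-side port mapping like "${PORT}:8000" never takes effect there; the process inside the container must bind to the injected PORT itself. A sketch of that contract (not the poster's code):

```shell
# Cloud Run injects PORT into the container environment at runtime; the
# server process must listen on 0.0.0.0:$PORT directly. Compose-style port
# mappings are a host concept and are ignored by Cloud Run.
PORT="${PORT:-8080}"
echo "server should bind 0.0.0.0:${PORT}"
```

In Go terms, that would mean passing the PORT value to http.ListenAndServe instead of hard-coding ":8000".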

Setup of Cyberark Conjur server

I've created a project in node.js to store and fetch credentials from cyberark conjur (using its REST-API)
But to test the application I'm stumbling to setup conjur server.
Problem is server is running fine within docker container, but how to access it outside(host machine) (port mapping is not working)
Or is there any conjur server hosted on Internet for public usage
All I want is to test API calls
As of writing this, the Conjur Node.js API is not currently being actively supported. Here are some suggestions for testing the APIs.
Can I see the command you're using to start docker / your docker-compose file?
Method 1
If you're using the setup from the Conjur Quickstart Guide, your docker-compose.yml file should look something like:
...
  conjur:
    image: cyberark/conjur
    container_name: conjur_server
    command: server
    environment:
      DATABASE_URL: postgres://postgres@database/postgres
      CONJUR_DATA_KEY:
      CONJUR_AUTHENTICATORS:
    depends_on:
      - database
    restart: on-failure
  proxy:
    image: nginx:1.13.6-alpine
    container_name: nginx_proxy
    ports:
      - "8443:443"
    volumes:
      - ./conf/:/etc/nginx/conf.d/:ro
      - ./conf/tls/:/etc/nginx/tls/:ro
    depends_on:
      - conjur
      - openssl
    restart: on-failure
...
This means Conjur is running behind an NGINX proxy to handle the SSL and does not have a port exposed to outside the Docker network it is running on. With this setup you can access the Conjur Server on https://localhost:8443 on your local machine.
Note: You will need the SSL cert located in ./conf/tls/. Since this is a demo environment, these are made readily available for testing like this.
Method 2
If you do not care about security and are just purely testing the REST API endpoints, you could always cut out the SSL and just modify the docker-compose.yml to expose the Conjur server's port to your local machine like this:
...
  conjur:
    image: cyberark/conjur
    container_name: conjur_server
    command: server
    environment:
      DATABASE_URL: postgres://postgres@database/postgres
      CONJUR_DATA_KEY:
      CONJUR_AUTHENTICATORS:
    ports:
      - "8080:80"
    depends_on:
      - database
    restart: on-failure
Now you'll be able to talk to the Conjur Server on your local machine through http://localhost:8080.
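Once the stack is up, a quick connectivity check against the exposed port could look like this (shown as a dry-run echo; uncomment the curl when the container is actually running):

```shell
# Hypothetical smoke test for Method 2 (no TLS): hit the published port on
# the host to confirm the Conjur server is reachable outside Docker.
CONJUR_URL="http://localhost:8080"
echo "checking ${CONJUR_URL}"
# curl -s -o /dev/null -w '%{http_code}\n' "${CONJUR_URL}/"
```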
For more info: Networking in Docker Compose docs
