I am working with an MLflow project, and one use case is this:
The MLflow run target/environment is Docker.
Data lives on AWS S3.
When developing on a laptop, the laptop has an AWS profile to access the data.
(When developing on EC2, the EC2 instance has an IAM role attached to access S3.)
Currently, I have credentials stored on the host in '~/.aws/credentials' and can access S3 from the host. The question is: in an MLflow project, how do I let the program running in Docker access the S3 files?
Note that the question is not "in general" how to set up Docker. The question is the recommended way to do the AWS setup/configuration in an MLflow project. Thanks!
You can use a volume for application data.
Specifically, for AWS credentials, you can mount the credentials directory itself.
You'll still need to install any required dependencies for AWS or MLflow, but here are the required parts for adding a user and mounting the credentials as a volume.
First, in your Dockerfile,
# base image is a placeholder; use whatever your project needs
FROM python:3.9
# add user with home directory
RUN useradd -m mlflow
# set default user
USER mlflow
# set working directory
WORKDIR /home/mlflow
Then mount the directory when you run the container:
docker run -it -v "${HOME}"/.aws:/home/mlflow/.aws \
mlflow
Note: never hard-code credentials inside a Docker image.
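Since the question is specifically about MLflow projects: the MLproject file's docker_env section can declare the volume mount and environment passthrough itself, so mlflow run adds them to the docker run invocation for you. A minimal sketch (image name, host path and entry point are placeholders, and the volumes/environment fields may depend on your MLflow version):
name: s3-example
docker_env:
  # image built from the Dockerfile above
  image: mlflow
  # mount the host credentials directory into the container user's home
  # (use an absolute host path; shell variables may not be expanded here)
  volumes: ["/absolute/path/to/.aws:/home/mlflow/.aws"]
  # copy AWS_PROFILE from the host environment into the container
  environment: ["AWS_PROFILE"]
entry_points:
  main:
    command: "python train.py"
With this in place, the AWS SDK (e.g. boto3) inside the container resolves credentials from the mounted profile exactly as it does on the host.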
We are pre-building the Shopware 6 code (composer install, storefront + admin build, but not the theme build) and copying it into a Docker container.
What is the best way to generate or supply the JWT secrets when running such a prebuilt container?
Normally we would do a
bin/console secrets:generate-keys
bin/console system:generate-jwt-secret
on the first installation.
But can these secrets also be kept in an ENV variable to avoid the need for a persistent /var volume?
You can override secrets locally as described here.
So in theory:
Run secrets:generate-keys to generate keys once.
Run secrets:decrypt-to-local to get the secrets added to your local env file.
Run secrets:encrypt-from-local on deployment to set the secrets from your env file.
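Put together, a sketch of the workflow (these are the stock Symfony/Shopware console commands; flags and env file names depend on your setup):
# one-time, on a machine that holds the vault keys
bin/console secrets:generate-keys
bin/console system:generate-jwt-secret
# dump the decrypted secrets into the local env file so they can live in ENV
bin/console secrets:decrypt-to-local --force
# at deploy time, re-encrypt the values from the local env vars into the vault
bin/console secrets:encrypt-from-local
That way the running container only needs the environment variables, not a persistent /var volume for the vault.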
I have a .env file for my docker-compose setup, and I was able to run it using "docker-compose up".
Now I have pushed the image to a cloud registry and want to run it on Cloud Run.
How can I supply the various environment variables?
I did create secrets in Secret Manager, but how can I integrate the two, so that my container starts reading all the needed secrets?
Note: my docker-compose setup is an app with a database. I can split them into 2 containers if needed, but they will still need secrets.
Edit: Added secret references.
EDIT:
I am unable to run my container.
If the env file has X=x and the docker-compose environment has app.prop=${X},
should I create the secret as X or as x?
Is Cloud Run using the Dockerfile or docker-compose? The image I pushed was built from docker-compose only. Sorry, I am getting confused (I'm not assuming trivial things, as it is not working).
It is not possible to use docker-compose on Cloud Run, which is designed for individual stateless containers. My suggestion is to create an image from your application service, upload the image to Google Container Registry so it can be used for your Cloud Run service, and connect the service to Cloud SQL following the attached documentation. You can store the database credentials in Secret Manager and pass them to your Cloud Run service as environment variables (check this documentation).
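As a sketch, the deploy command could look like this (service, image, secret and instance names are placeholders; --set-secrets expects ENV_VAR=SECRET_NAME:VERSION):
gcloud run deploy my-app \
  --image gcr.io/PROJECT_ID/my-app \
  --set-secrets "DB_PASSWORD=db-password:latest" \
  --add-cloudsql-instances PROJECT_ID:REGION:INSTANCE
As for the X=x question: the secret holds the value (x), and you map it to the variable name your application actually reads (X in your compose file).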
Do they use environment/config variables to link the persistent storage to the project-related Docker image,
so that every time a new VM is assigned, the Cloud Shell image can be run with those user-specific values?
I'm not sure I've caught all your questions and concerns. So, Cloud Shell consists of 2 parts:
The container that contains all the installed libraries, language support/SDKs, and binaries (Docker, for example). This container is stateless and you can change it (in the settings section of Cloud Shell) if you want to deploy a custom container. For example, that's what is done with Cloud Run Button for deploying a Cloud Run service automatically.
The volume dedicated to the current user that is mounted in the Cloud Shell container.
By the way, you can easily deduce that anything you store outside the /home/<user> directory is stateless and does not persist: the /tmp directory, Docker images (pulled or created), ... all of these are lost when Cloud Shell starts on another VM.
Only the volume dedicated to the user is stateful, and it is limited to 5 GB. It's a Linux environment and you can customize the .profile and .bashrc files as you want. You can store keys in the ~/.ssh/ directory and use all the other tricks that you can do on Linux in your /home directory.
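For instance, a minimal sketch of persisting a per-project value across sessions (variable name and value are placeholders):
# ~/.profile lives in the user volume, so it survives VM reassignment
echo 'export PROJECT_ID=my-project' >> ~/.profile
The same line written to a file outside /home would be gone the next time Cloud Shell lands on another VM.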
I am planning to use WSO2 API Manager for a client, and I'm planning to use the API Manager Docker image for hosting it.
But it looks like, to use the API Manager Docker image, I need a paid subscription once the trial period ends.
The link https://wso2.com/api-management/install/docker/get-started/ says:
"In order to use WSO2 product Docker images, you need an active WSO2 subscription."
Is that right?
Can't I have the image running on the client's premises without any subscription?
You can build it yourself using their official Dockerfiles, which are hosted on GitHub, and then push it to your own registry.
The Dockerfiles for the other WSO2 products can be found under the same GitHub account.
The following steps describe how to build an image and run WSO2 API Manager, taken from this README.md file.
Check out this repository onto your local machine using the following Git command.
git clone https://github.com/wso2/docker-apim.git
The local copy of the dockerfiles/ubuntu/apim directory will be referred to as AM_DOCKERFILE_HOME from this point onwards.
Add WSO2 API Manager distribution and MySQL connector to <AM_DOCKERFILE_HOME>/files.
Download WSO2 API Manager v2.6.0
distribution and extract it to <AM_DOCKERFILE_HOME>/files.
Download MySQL Connector/J
and copy that to <AM_DOCKERFILE_HOME>/files.
Once all of these are in place, it should look as follows:
<AM_DOCKERFILE_HOME>/files/wso2am-2.6.0/
<AM_DOCKERFILE_HOME>/files/mysql-connector-java-<version>-bin.jar
Please refer to the WSO2 Update Manager documentation to obtain the latest bug fixes and updates for the product.
Build the Docker image.
Navigate to <AM_DOCKERFILE_HOME> directory.
Execute the docker build command as shown below.
docker build -t wso2am:2.6.0 .
Running the Docker image.
docker run -it -p 9443:9443 wso2am:2.6.0
Here, only port 9443 (HTTPS servlet transport) has been mapped to a Docker host port.
You may map other exposed container service ports to Docker host ports as desired.
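For example, to also map the API gateway transports (8280/8243 are the default HTTP/HTTPS passthrough ports in APIM 2.x; verify them against your configuration):
docker run -it \
  -p 9443:9443 \
  -p 8280:8280 \
  -p 8243:8243 \
  wso2am:2.6.0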
Accessing management console.
To access the management console, use the docker host IP and port 9443.
https://<DOCKER_HOST>:9443/carbon
Here, <DOCKER_HOST> refers to the hostname or IP of the host machine on which the containers are spawned.
How to update configurations
Configurations live on the Docker host machine and can be volume-mounted into the container.
As an example, the steps required to change the port offset using carbon.xml are as follows.
Stop the API Manager container if it's already running. In the WSO2 API Manager 2.6.0 product distribution, the carbon.xml configuration file
can be found at <DISTRIBUTION_HOME>/repository/conf. Copy the file to a suitable location on the host machine, referred to as <SOURCE_CONFIGS>/carbon.xml, and change the offset value under ports to 1.
Grant read permission to other users for <SOURCE_CONFIGS>/carbon.xml
chmod o+r <SOURCE_CONFIGS>/carbon.xml
Run the image by mounting the file into the container as follows.
docker run \
-p 9444:9444 \
--volume <SOURCE_CONFIGS>/carbon.xml:<TARGET_CONFIGS>/carbon.xml \
wso2am:2.6.0
Here, <TARGET_CONFIGS> refers to the /home/wso2carbon/wso2am-2.6.0/repository/conf directory of the container.
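So, with the default distribution layout, the fully substituted command would be (the host path is a placeholder):
docker run \
  -p 9444:9444 \
  --volume /path/to/configs/carbon.xml:/home/wso2carbon/wso2am-2.6.0/repository/conf/carbon.xml \
  wso2am:2.6.0
Port 9444 is mapped because the offset of 1 shifts the management console from 9443 to 9444.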
The steps above are for Ubuntu; for other distributions, check the corresponding directory in the same repository and then read the README.md file inside.
You can build the docker images yourself. Follow the instructions given at https://github.com/wso2/docker-apim/tree/master/dockerfiles/ubuntu/apim#how-to-build-an-image-and-run.
The caveat is that you will not get any bug fixes if you do not have a subscription.
I am using Docker to deploy my ASP.NET Core Web API microservices and am looking at the options for injecting configuration into each container. The standard way of using an appsettings.json file in the application root directory is not ideal, because as far as I can see, that means building the file into my Docker images, which would then limit which environments the image could run in.
I want to build an image once which can then be provided configuration at runtime and rolled through dev, test, UAT and into Production without creating an image for each environment.
Options seem to be:
Providing config via environment variables. Seems a bit tedious.
Somehow mapping a path in the container to a standard location on the host server where appsettings.json sits, and getting the service to pick this up (how?)
Maybe it's possible to provide values on the docker run command line?
Does anyone have experience with this? Could you provide code samples/directions, particularly on option 2) which seems the best at the moment?
It's possible to create data volumes in the Docker image/container, and also to mount a host directory into a container. The host directory will then be accessible inside the container.
Adding a data volume
You can add a data volume to a container using the -v flag with the docker create and docker run commands.
$ docker run -d -P --name web -v /webapp training/webapp python app.py
This will create a new volume inside a container at /webapp.
Mount a host directory as a data volume
In addition to creating a volume using the -v flag you can also mount a directory from your Docker engine’s host into a container.
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /webapp.
Refer to the Docker Data Volumes documentation.
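Applied to option 2) from the question, a minimal sketch, assuming the app is published to /app inside the image and each host keeps its environment's config under /etc/myapp (both paths and the image name are placeholders):
docker run -d -p 8080:80 \
  -v /etc/myapp/appsettings.json:/app/appsettings.json \
  myapi:latest
The same image then picks up different settings on the dev, test, UAT and Production hosts, because ASP.NET Core reads appsettings.json from the content root at startup.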
We are using another packaging system for now (not Docker itself), but we still have the same issue: the package can be deployed in any environment.
So, the way we are doing it now:
Use an external configuration management system to hold and manage configuration per environment.
Inject into our package the basic environment variables that hold the configuration management system's connection details.
This way we not only allow the package to run in almost any "known" environment, but also get run-time configuration management.
When running Docker, you can use the environment variable option of the run command:
$ docker run -e "deep=purple" ...
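You can also load a whole file of variables with --env-file. And since the question is about ASP.NET Core: its configuration provider maps environment variables onto config keys, with a double underscore standing in for the colon separator (the file name, key and image name below are placeholders):
# pass every variable defined in the file into the container
docker run --env-file ./prod.env myapi:latest
# ConnectionStrings:Default in appsettings.json becomes ConnectionStrings__Default
docker run -e "ConnectionStrings__Default=Server=db;Database=app" myapi:latest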