Docker command for multiple TensorFlow Serving models?

I'm trying to run the equivalent of this normal tf_serving command (which works correctly) with the Docker version of tf_serving, and I'm not sure why it's not working. Any suggestions? I'm new to Docker!
Normal tf_serving command:
tensorflow_model_server \
--model_config_file=/opt/tf_serving/model_config.conf \
--port=6006
Here is what my model_config.conf looks like:
model_config_list: {
config: {
name: "model_1",
base_path: "/opt/tf_serving/model_1",
model_platform: "tensorflow",
},
config: {
name: "model_2",
base_path: "/opt/tf_serving/model_2",
model_platform: "tensorflow",
},
}
Docker version of the command that I'm trying, which is not working:
docker run --runtime=nvidia \
-p 6006:6006 \
--mount type=bind,source=/opt/tf_serving/model_1,target=/models/model_1/ \
--mount type=bind,source=/opt/tf_serving/model_2,target=/models/model_2/ \
--mount type=bind,source=/opt/tf_serving/model_config.conf,target=/config/model_config.conf \
-t tensorflow/serving:latest-gpu --model_config_file=/config/model_config.conf
Error:
2019-04-13 19:41:00.838340: E tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:369] FileSystemStoragePathSource encountered a file-system access error: Could not find base path /opt/tf_serving/model_1 for servable model_1

Found the issue! You have to change the model base paths in model_config.conf as follows, and the above docker command will work and load both models:
model_config_list: {
config: {
name: "model_1",
base_path: "/models/model_1",
model_platform: "tensorflow",
},
config: {
name: "model_2",
base_path: "/models/model_2",
model_platform: "tensorflow",
},
}
EDIT: corrected typo on base_path for model_2.
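As a quick sanity check (a hedged sketch, assuming you also publish the serving container's default REST port with -p 8501:8501), you can query the model status endpoint for each servable:
# Both models should report a version state of AVAILABLE.
curl http://localhost:8501/v1/models/model_1
curl http://localhost:8501/v1/models/model_2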

Related

No internet connection when building image

I created a custom configuration for the Docker daemon; here is the config:
{
"bip" : "192.168.1.32/24",
"default-address-pools" : [
{
"base" : "172.16.200.0/24",
"size" : 24
},
{
"base" : "172.16.25.0/24",
"size" : 24
}
],
"debug" : true,
"hosts" : ["tcp://127.0.0.69:4269", "unix:///var/run/docker.sock"],
"default-gateway" : "192.168.1.1",
"dns" : ["8.8.8.8", "8.8.4.4"],
"experimental" : true,
"log-driver" : "json-file",
"log-opts" : {
"max-size" : "20m",
"max-file" : "3",
"labels" : "develope_status",
"env" : "developing"
}
}
bip is my host IP address and default-gateway is my router gateway. I created 2 address pools so that Docker can assign IP addresses to containers.
But during the build process the image has no internet, so it can't do apk update.
Here is my docker-compose file
version: "3"
services:
blog:
build: ./
image: blog:1.0
ports:
- "456:80"
- "450:443"
volumes:
- .:/blog
- ./logs:/var/log/apache2
- ./httpd-ssl.conf:/usr/local/apache2/conf/extra/httpd-ssl.conf
container_name: blog
hostname: alpine
command: apk update
networks:
default:
external:
name: vpc
Here is the Dockerfile
FROM httpd:2.4-alpine
ENV REFRESHED_AT=2021-02-01 \
APACHE_RUN_USER=www-data \
APACHE_RUN_GROUP=www-data \
APACHE_LOG_DIR=/var/log/apache2 \
APACHE_PID_FILE=/var/run/apache2.pid \
APACHE_RUN_DIR=/var/run/apache2 \
APACHE_LOCK_DIR=/var/lock/apache2
RUN mkdir -p $APACHE_RUN_DIR $APACHE_LOCK_DIR $APACHE_LOG_DIR \
&& sed -i \
-e 's/^#\(Include .*httpd-ssl.conf\)/\1/' \
-e 's/^#\(LoadModule .*mod_ssl.so\)/\1/' \
-e 's/^#\(LoadModule .*mod_socache_shmcb.so\)/\1/' \
conf/httpd.conf \
&& echo "Include /blog/blog.conf" >> /usr/local/apache2/conf/httpd.conf
VOLUME ["/blog", "$APACHE_LOG_DIR"]
EXPOSE 80 443
The running container is able to ping Google and do apk update as normal, but if I put RUN apk update inside the Dockerfile it won't update.
Any help would be great, thank you.

MLFlow run passing Google Application credentials

I want to pass my GOOGLE_APPLICATION_CREDENTIALS environment variable when I run mlflow run using a Docker container.
This is my current docker run when using mlflow run:
Running command 'docker run --rm -e MLFLOW_RUN_ID=f18667e37ecb486cac4631cbaf279903 -e MLFLOW_TRACKING_URI=http://3.1.1.11:5000 -e MLFLOW_EXPERIMENT_ID=0 mlflow_gcp:33156ee python -m trainer.task --job-dir /tmp/ \
--num-epochs 10 \
--train-steps 1000 \
--eval-steps 1 \
--train-files gs://cloud-samples-data/ml-engine/census/data/adult.data.csv \
--eval-files gs://cloud-samples-data/ml-engine/census/data/adult.test.csv \
--batch-size 128
This is how I would normally pass it:
docker run \
-p 9090:${PORT} \
-e PORT=${PORT} \
-e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/[FILE_NAME].json
What is the best way to pass this value to mlflow? I'm writing files to GCS, and the Docker container requires access to GCP.
MLproject contents
name: mlflow_gcp
docker_env:
  image: mlflow-gcp-example
entry_points:
  main:
    parameters:
      job_dir:
        type: string
        default: '/tmp/'
      num_epochs:
        type: int
        default: 10
      train_steps:
        type: int
        default: 1000
      eval_steps:
        type: int
        default: 1
      batch_size:
        type: int
        default: 64
      train_files:
        type: string
        default: 'gs://cloud-samples-data/ml-engine/census/data/adult.data.csv'
      eval_files:
        type: string
        default: 'gs://cloud-samples-data/ml-engine/census/data/adult.test.csv'
      mlflow_tracking_uri:
        type: uri
        default: ''
    command: |
      python -m trainer.task --job-dir {job_dir} \
        --num-epochs {num_epochs} \
        --train-steps {train_steps} \
        --eval-steps {eval_steps} \
        --train-files {train_files} \
        --eval-files {eval_files} \
        --batch-size {batch_size} \
        --mlflow-tracking-uri {mlflow_tracking_uri}
I already tried setting it in the Python file, but that fails since the Docker container has no access to the local file system:
import os
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/Users/user/key.json"

TensorFlow Serving multiple models via docker

I am unable to run 2 or more models with TensorFlow Serving via Docker on a Windows 10 machine.
I have made a models.config file:
model_config_list: {
config: {
name: "ukpred2",
base_path: "/models/my_models/ukpred2",
model_platform: "tensorflow"
},
config: {
name: "model3",
base_path: "/models/my_models/ukpred3",
model_platform: "tensorflow"
}
}
docker run -p 8501:8501 --mount type=bind,source=C:\Users\th3182\Documents\temp\models\,target=/models/my_models --mount type=bind,source=C:\Users\th3182\Documents\temp\models.config,target=/models/models.config -t tensorflow/serving --model_config_file=/models/models.config
In C:\Users\th3182\Documents\temp\models there are 2 folders, ukpred2 and ukpred3. In these folders are the exported folders from the trained models, e.g. 1536668276, each of which contains an assets folder, a variables folder and a saved_model.pb file.
The error I get is
2018-09-13 15:24:50.567686: I tensorflow_serving/model_servers/main.cc:157] Building single TensorFlow model file config: model_name: model model_base_path: /models/model
2018-09-13 15:24:50.568209: I tensorflow_serving/model_servers/server_core.cc:462] Adding/updating models.
2018-09-13 15:24:50.568242: I tensorflow_serving/model_servers/server_core.cc:517] (Re-)adding model: model
2018-09-13 15:24:50.568640: E tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:369] FileSystemStoragePathSource encountered a file-system access error: Could not find base path /models/model for servable model
I can't seem to get this to work with variations on the above, but I have managed to serve a single model with the following command:
docker run -p 8501:8501 --mount type=bind,source=C:\Users\th3182\Documents\projects\Better_Buyer2\model2\export\exporter,target=/models/model2 -e MODEL_NAME=model2 -t tensorflow/serving
You'll have to wait for the next release (1.11.0) for this to work. In the interim, you can use the image tensorflow/serving:nightly or tensorflow/serving:1.11.0-rc0.
In TensorFlow Serving 2.6.0, the model server config for multiple models looks like this:
model_config_list {
config {
name: 'my_first_model'
base_path: '/tmp/my_first_model/'
model_platform: 'tensorflow'
}
config {
name: 'my_second_model'
base_path: '/tmp/my_second_model/'
model_platform: 'tensorflow'
}
}
Example: Run multiple models using tensorflow/serving
docker run -p 8500:8500 \
-p 8501:8501 \
--mount type=bind,source=/tmp/models,target=/models/my_first_model \
--mount type=bind,source=/tmp/models,target=/models/my_second_model \
--mount type=bind,source=/tmp/model_config,\
target=/models/model_config \
-e MODEL_NAME=my_first_model \
-t tensorflow/serving \
--model_config_file=/models/model_config
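Once the container is up, a quick way to confirm that both models loaded (a small sketch, assuming the REST port 8501 is published as in the command above) is to hit the model status endpoint for each configured name:
# Each request should report the model version as AVAILABLE.
curl http://localhost:8501/v1/models/my_first_model
curl http://localhost:8501/v1/models/my_second_model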
For more information, please refer to Model Server Configuration.

Docker: How to create a table for local dynamo DB?

I'm trying to create a Docker container with a local Amazon DynamoDB, and it actually works. But I cannot figure out how to create a table for this image in the Dockerfile.
Through JavaScript I create a table like this:
var params = {
  TableName: 'UserActivity',
  KeySchema: [
    {
      AttributeName: 'user_id',
      KeyType: 'HASH',
    },
    {
      AttributeName: 'user_action',
      KeyType: 'RANGE',
    }
  ],
  AttributeDefinitions: [
    {
      AttributeName: 'user_id',
      AttributeType: 'S',
    },
    {
      AttributeName: 'user_action',
      AttributeType: 'S',
    }
  ],
  ProvisionedThroughput: {
    ReadCapacityUnits: 2,
    WriteCapacityUnits: 2,
  }
};
dynamodb.createTable(params, function(err, data) {
  if (err) ppJson(err); // an error occurred
  else ppJson(data); // successful response
});
And here is my Dockerfile:
FROM openjdk:8-jre-alpine
ENV DYNAMODB_VERSION=latest
RUN apk add --update curl && \
rm -rf /var/cache/apk/* && \
curl -O https://s3-us-west-2.amazonaws.com/dynamodb-local/dynamodb_local_${DYNAMODB_VERSION}.tar.gz && \
tar zxvf dynamodb_local_${DYNAMODB_VERSION}.tar.gz && \
rm dynamodb_local_${DYNAMODB_VERSION}.tar.gz
EXPOSE 8000
ENTRYPOINT ["java", "-Djava.library.path=.", "-jar", "DynamoDBLocal.jar", "--sharedDb", "-inMemory", "-port", "8000"]
You need to point your DynamoDB client to the local DynamoDB endpoint.
Do something like:
var AWS = require('aws-sdk');
var dynamodb = new AWS.DynamoDB({
  endpoint: new AWS.Endpoint('http://localhost:8000')
});
dynamodb.createTable(...);
UPDATE
Alternatively, you can use the AWS CLI to create tables in your local DynamoDB without using JavaScript code. You need to use "--endpoint-url" to point the CLI to your local instance, like this:
aws dynamodb list-tables --endpoint-url http://localhost:8000
You need to use the create-table command to create a table.
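For example, a hedged sketch of the CLI equivalent of the JavaScript params above (assuming the local instance listens on port 8000):
aws dynamodb create-table \
    --table-name UserActivity \
    --attribute-definitions AttributeName=user_id,AttributeType=S AttributeName=user_action,AttributeType=S \
    --key-schema AttributeName=user_id,KeyType=HASH AttributeName=user_action,KeyType=RANGE \
    --provisioned-throughput ReadCapacityUnits=2,WriteCapacityUnits=2 \
    --endpoint-url http://localhost:8000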
The main problem was in the base image which I used for local DynamoDB, namely openjdk:8-jre-alpine.
It looks like the Alpine distribution is missing something that local DynamoDB needs.
So here is a working Dockerfile for local DynamoDB:
FROM openjdk:8-jre
ENV DYNAMODB_VERSION=latest
COPY .aws/ root/.aws/
RUN curl -O https://s3-us-west-2.amazonaws.com/dynamodb-local/dynamodb_local_${DYNAMODB_VERSION}.tar.gz && \
tar zxvf dynamodb_local_${DYNAMODB_VERSION}.tar.gz && \
rm dynamodb_local_${DYNAMODB_VERSION}.tar.gz
EXPOSE 8000
ENTRYPOINT ["java", "-Djava.library.path=.", "-jar", "DynamoDBLocal.jar", "--sharedDb", "-inMemory"]
I'd be happy to learn what the actual problem with Alpine is in this particular case.
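For reference, a minimal sketch of building and running it (local-dynamodb is just a hypothetical image name); tables can then be created against it with the CLI as shown above:
docker build -t local-dynamodb .
docker run -d -p 8000:8000 local-dynamodb
aws dynamodb list-tables --endpoint-url http://localhost:8000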

Spring Cloud Config Server Won't Serve from Local Filesystem

Note: I've read this and this question. Neither helped.
I've created a Spring config server Docker image. The intent is to be able to run multiple containers with different profiles and search locations. This is where it differs from the questions above, where the properties were either loaded from git or from classpath locations known to the config server at startup. My config server is also deployed traditionally in Tomcat, not using a repackaged boot jar.
When I access http://<docker host>:8888/movie-service/native, I get no content (see below). The content I'm expecting is given at the end of the question.
{"name":"movie-service","profiles":["native"],"label":"master","propertySources":[]}
I've tried just about everything under the sun but just can't get it to work.
Config server main:
@SpringBootApplication
@EnableConfigServer
@EnableDiscoveryClient
public class ConfigServer extends SpringBootServletInitializer {

    /* Check out the EnvironmentRepositoryConfiguration for details */
    public static void main(String[] args) {
        SpringApplication.run(ConfigServer.class, args);
    }

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        return application.sources(ConfigServer.class);
    }
}
Config server application.yml:
spring:
  cloud:
    config:
      enabled: true
      server:
        git:
          uri: ${CONFIG_LOCATION}
        native:
          searchLocations: ${CONFIG_LOCATION}
server:
  port: ${HTTP_PORT:8080}
Config server bootstrap.yml:
spring:
  application:
    name: config-service
eureka:
  instance:
    hostname: ${CONFIG_HOST:localhost}
    preferIpAddress: false
  client:
    registerWithEureka: ${REGISTER_WITH_DISCOVERY:true}
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://${DISCOVERY_HOST:localhost}:${DISCOVERY_PORT:8761}/eureka/
Docker run command:
docker run -it -p 8888:8888 \
-e CONFIG_HOST="$(echo $DOCKER_HOST | grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}')" \
-e HTTP_PORT=8888 \
-e CONFIG_LOCATION="file:///Users/<some path>/config/" \
-e REGISTER_WITH_DISCOVERY="false" \
-e SPRING_PROFILES_ACTIVE=native xxx
Config Directory:
/Users/<some path>/config
--- movie-service.yml
Content of movie-service.yml:
themoviedb:
  url: http://api.themoviedb.org
Figured this out myself. The Docker container doesn't have access to the host file system unless the file system directory is mounted at runtime using a -v flag.
For the sake of completeness, this is the full working Docker run command:
docker run -it -p 8888:8888 \
-e CONFIG_HOST="$(echo $DOCKER_HOST | grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}')" \
-e HTTP_PORT=8888 \
-e CONFIG_LOCATION="file:///Users/<some path>/config/" \
-e REGISTER_WITH_DISCOVERY="false" \
-e SPRING_PROFILES_ACTIVE=native \
-v "/Users/<some path>/config/:/Users/<some path>/config/" \
xxx
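With the volume mounted, hitting the same URL should now return the native property source instead of an empty list (a hedged sketch of the expected shape, not verbatim output):
curl http://<docker host>:8888/movie-service/native
# roughly: {"name":"movie-service","profiles":["native"], ...,
#   "propertySources":[{"name":"file:///Users/<some path>/config/movie-service.yml",
#     "source":{"themoviedb.url":"http://api.themoviedb.org"}}]}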
