I am unable to run two or more models with TensorFlow Serving via Docker on a Windows 10 machine.
I have made a models.config file:
model_config_list: {
  config: {
    name: "ukpred2",
    base_path: "/models/my_models/ukpred2",
    model_platform: "tensorflow"
  },
  config: {
    name: "model3",
    base_path: "/models/my_models/ukpred3",
    model_platform: "tensorflow"
  }
}
docker run -p 8501:8501 --mount type=bind,source=C:\Users\th3182\Documents\temp\models\,target=/models/my_models --mount type=bind,source=C:\Users\th3182\Documents\temp\models.config,target=/models/models.config -t tensorflow/serving --model_config_file=/models/models.config
In C:\Users\th3182\Documents\temp\models there are 2 folders, ukpred2 and ukpred3. These contain the exported folders from the trained models, i.e. 1536668276, which contains an assets folder, a variables folder and a saved_model.pb file.
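For reference, TensorFlow Serving expects a numeric version folder under each model's base path, so inside the container the layout looks like this (the version folder under ukpred3 is assumed to carry a similar timestamp):
/models/my_models/
├── ukpred2/
│   └── 1536668276/
│       ├── assets/
│       ├── variables/
│       └── saved_model.pb
└── ukpred3/
    └── <version>/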
The error I get is:
2018-09-13 15:24:50.567686: I tensorflow_serving/model_servers/main.cc:157] Building single TensorFlow model file config: model_name: model model_base_path: /models/model
2018-09-13 15:24:50.568209: I tensorflow_serving/model_servers/server_core.cc:462] Adding/updating models.
2018-09-13 15:24:50.568242: I tensorflow_serving/model_servers/server_core.cc:517] (Re-)adding model: model
2018-09-13 15:24:50.568640: E tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:369] FileSystemStoragePathSource encountered a file-system access error: Could not find base path /models/model for servable model
I can't seem to get this to work with variations on the above, but I have managed to serve a single model with the following command:
docker run -p 8501:8501 --mount type=bind,source=C:\Users\th3182\Documents\projects\Better_Buyer2\model2\export\exporter,target=/models/model2 -e MODEL_NAME=model2 -t tensorflow/serving
You'll have to wait for the next release (1.11.0) for this to work. In the interim, you can use the image tensorflow/serving:nightly or tensorflow/serving:1.11.0-rc0
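For example, the command from the question should work unchanged once it points at the release-candidate image:
docker run -p 8501:8501 --mount type=bind,source=C:\Users\th3182\Documents\temp\models\,target=/models/my_models --mount type=bind,source=C:\Users\th3182\Documents\temp\models.config,target=/models/models.config -t tensorflow/serving:1.11.0-rc0 --model_config_file=/models/models.config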
In TensorFlow Serving 2.6.0, the Model Server config for multiple models looks like this:
model_config_list {
  config {
    name: 'my_first_model'
    base_path: '/models/my_first_model/'
    model_platform: 'tensorflow'
  }
  config {
    name: 'my_second_model'
    base_path: '/models/my_second_model/'
    model_platform: 'tensorflow'
  }
}
Example: Run multiple models using tensorflow/serving
docker run -p 8500:8500 \
-p 8501:8501 \
--mount type=bind,source=/tmp/models,target=/models/my_first_model \
--mount type=bind,source=/tmp/models,target=/models/my_second_model \
--mount type=bind,source=/tmp/model_config,\
target=/models/model_config \
-e MODEL_NAME=my_first_model \
-t tensorflow/serving \
--model_config_file=/models/model_config
For more information, please refer to Model Server Configuration.
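Once the container is up, each model can be checked and queried over the REST port published above; a minimal sketch (the predict payload shape depends on your model's signature):
# Model status
curl http://localhost:8501/v1/models/my_first_model
# Prediction (the input shape here is an assumption)
curl -X POST http://localhost:8501/v1/models/my_first_model:predict \
  -d '{"instances": [[1.0, 2.0, 3.0]]}'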
Related
I can run Keycloak with the following command:
./bin/kc.sh start-dev \
--https-certificate-file=/etc/letsencrypt/live/$HOSTNAME/cert.pem \
--https-certificate-key-file=/etc/letsencrypt/live/$HOSTNAME/privkey.pem \
--hostname=$HOSTNAME
It works as expected.
On the same computer, I try to run it using Docker:
docker run -p 80:8080 -p 443:8443 \
-v /etc/letsencrypt:/etc/letsencrypt:ro \
-e KEYCLOAK_ADMIN=admin \
-e KEYCLOAK_ADMIN_PASSWORD=change_me \
-e JAVA_OPTS_APPEND="$JAVA_OPTS_APPEND" \
quay.io/keycloak/keycloak:latest \
start-dev \
--https-certificate-file=/ect/letsencrypt/live/$HOSTNAME/cert.pem \
--https-certificate-key-file=/ect/letsencrypt/live/$HOSTNAME/privkey.pem \
--hostname=$HOSTNAME
It fails:
2022-12-23 23:11:59,784 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start server in (development) mode
2022-12-23 23:11:59,785 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: /ect/letsencrypt/live/keycloak.fhir-poc.hcs.us.com/cert.pem
2022-12-23 23:11:59,787 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) Key material not provided to setup HTTPS. Please configure your keys/certificates.
Any suggestions besides a reverse proxy?
The problem stems from letsencrypt's symlinked directory structure on Linux and the permissions needed to access those files.
Letsencrypt's linked directory structure works like this:
/etc/letsencrypt/live/<your-domain>/*.pem -> /etc/letsencrypt/archive/<your-domain>/*.pem
The problem is the link from the live folder to the archive folder/files: the permissions on them are usually not sufficient for the container.
A quick fix is to create a cert mirror: copy the related files from /etc/letsencrypt/live/<your-domain>/*.pem
to a new cert folder like /opt/certs,
change the permissions on /opt/certs to 777: chmod -R 777 /opt/certs,
and create a job in /etc/cron.monthly that copies the files to /opt/certs and fixes the permissions, so your cert mirror stays up to date every month.
This will make your example work. Keep in mind that permissions like 777 let everyone access these files; use proper permissions in a production environment.
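A minimal sketch of such a monthly job, assuming the paths above (the script name is arbitrary; cp -L copies the files the symlinks point to rather than the links themselves):
#!/bin/sh
# /etc/cron.monthly/certs-mirror: refresh the cert mirror from letsencrypt
mkdir -p /opt/certs
cp -L /etc/letsencrypt/live/<your-domain>/*.pem /opt/certs/
chmod -R 777 /opt/certs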
I discovered the answer.
Letsencrypt certificates in the "live" folder are symlinks to the "archive" folder, and I needed a custom Docker image for Keycloak to mount my certificates. So I followed the Keycloak docs for creating a custom Docker image and started a container with that image.
Following
https://www.keycloak.org/server/containers
https://eff-certbot.readthedocs.io/en/stable/using.html#where-are-my-certificates
to build a custom image and change the cert permissions
Dockerfile
FROM quay.io/keycloak/keycloak:latest as builder
ENV KEYCLOAK_ADMIN=root
ENV KEYCLOAK_ADMIN_PASSWORD=change_me
WORKDIR /opt/keycloak

FROM quay.io/keycloak/keycloak:latest
COPY --from=builder /opt/keycloak/ /opt/keycloak/
COPY kc-export.json /opt/keycloak/kc-export.json
# Import a previously exported realm at build time
RUN /opt/keycloak/bin/kc.sh import --file /opt/keycloak/kc-export.json
# Mount point for the certificates passed in at run time
VOLUME [ "/opt/keycloak/certs" ]
ENTRYPOINT ["/opt/keycloak/bin/kc.sh"]
Then start the container:
docker run -p 8443:8443 \
-v /etc/letsencrypt:/opt/keycloak/certs:ro \
-e KEYCLOAK_ADMIN=admin \
-e KEYCLOAK_ADMIN_PASSWORD=change_me \
-e JAVA_OPTS_APPEND="$JAVA_OPTS_APPEND" \
my-keycloak-image:latest \
start-dev \
--https-certificate-file=/opt/keycloak/certs/live/$HOSTNAME/cert.pem \
--https-certificate-key-file=/opt/keycloak/certs/live/$HOSTNAME/privkey.pem \
--hostname=$HOSTNAME
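To verify the container actually picked up the certificates, a quick probe of the published HTTPS port should present the letsencrypt certificate (assuming $HOSTNAME resolves to the Docker host):
curl -v https://$HOSTNAME:8443/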
I'm trying to execute this normal tf_serving command (which works correctly) with the Docker version of tf_serving. I'm not sure why it's not working. Any suggestions? I'm new to Docker!
Normal tf_serving command:
tensorflow_model_server \
--model_config_file=/opt/tf_serving/model_config.conf \
--port=6006
Here is what my model_config.conf looks like:
model_config_list: {
  config: {
    name: "model_1",
    base_path: "/opt/tf_serving/model_1",
    model_platform: "tensorflow",
  },
  config: {
    name: "model_2",
    base_path: "/opt/tf_serving/model_2",
    model_platform: "tensorflow",
  },
}
The Docker version of the command that I'm trying, which isn't working:
docker run --runtime=nvidia \
-p 6006:6006 \
--mount type=bind,source=/opt/tf_serving/model_1,target=/models/model_1/ \
--mount type=bind,source=/opt/tf_serving/model_2,target=/models/model_2/ \
--mount type=bind,source=/opt/tf_serving/model_config.conf,target=/config/model_config.conf \
-t tensorflow/serving:latest-gpu --model_config_file=/config/model_config.conf
Error:
2019-04-13 19:41:00.838340: E tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:369] FileSystemStoragePathSource encountered a file-system access error: Could not find base path /opt/tf_serving/model_1 for servable model_1
Found the issue! The base_path entries in model_config.conf must point to the paths inside the container (the --mount targets), not the host paths. Change them as follows and the above docker command will work and load both models!
model_config_list: {
  config: {
    name: "model_1",
    base_path: "/models/model_1",
    model_platform: "tensorflow",
  },
  config: {
    name: "model_2",
    base_path: "/models/model_2",
    model_platform: "tensorflow",
  },
}
EDIT: corrected typo on base_path for model_2.
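A side note on ports: the stock tensorflow/serving image starts the server with --port=8500 and --rest_api_port=8501 and appends any extra arguments such as --model_config_file, so the -p 6006:6006 mapping above does not actually reach the server. Publishing -p 8501:8501 instead lets you check a model over REST, for example:
curl http://localhost:8501/v1/models/model_1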
I'm new to TensorFlow Serving.
I just tried TensorFlow Serving via Docker with this tutorial and succeeded.
However, when I tried it with multiple versions, it served only the latest version.
Is it possible to serve several versions? Or do I need to try something different?
This requires a ModelServerConfig, which is supported by the tensorflow/serving docker image from release 1.11.0 (available since 5 Oct 2018). Until then, you can create your own docker image, or use tensorflow/serving:nightly or tensorflow/serving:1.11.0-rc0 as stated here.
See that thread for how to implement multiple models.
If, on the other hand, you want to enable multiple versions of a single model, you can use the following config file, called "models.config":
model_config_list: {
  config: {
    name: "my_model",
    base_path: "/models/my_model",
    model_platform: "tensorflow",
    model_version_policy: {
      all: {}
    }
  }
}
here "model_version_policy: {all:{ } }" make every versions of the model available.
Then run the docker:
docker run -p 8500:8500 -p 8501:8501 \
--mount type=bind,source=/path/to/my_model/,target=/models/my_model \
--mount type=bind,source=/path/to/my/models.config,target=/models/models.config \
-t tensorflow/serving:nightly --model_config_file=/models/models.config
Edit:
Now that version 1.11.0 is available, you can start by pulling the new image:
docker pull tensorflow/serving
Then run the docker image as above, using tensorflow/serving instead of tensorflow/serving:nightly.
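With every version served, a specific version can also be addressed over REST, for example (the version number and payload here are illustrative):
curl -X POST http://localhost:8501/v1/models/my_model/versions/2:predict \
  -d '{"instances": [[1.0]]}'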
I found a way to achieve this by building my own docker image which uses the --model_config_file option instead of --model_name and --model_base_path.
So I'm running tensorflow serving with below command.
docker run -p 8501:8501 -v {local_path_of_models.conf}:/models -t {docker_image_name}
Of course, I also wrote 'models.conf' to list multiple models.
Edit:
Below is what I modified from the original Dockerfile.
original version:
tensorflow_model_server --port=8500 --rest_api_port=8501 \
--model_name=${MODEL_NAME} --model_base_path=${MODEL_BASE_PATH}/${MODEL_NAME}
modified version:
tensorflow_model_server --port=8500 --rest_api_port=8501 \
--model_config_file=${MODEL_BASE_PATH}/models.conf
The Readme on https://github.com/swagger-api/swagger-ui specifies that Swagger-UI can be run with your own file like this
docker run -p 80:8080 -e SWAGGER_JSON=/foo/swagger.json -v /bar:/foo swaggerapi/swagger-ui
which works if I translate it to
docker build . -t swagger-ui-local && \
docker run -p 80:8080 -e SWAGGER_JSON=/foo/my-file.json -v $PWD:/foo swagger-ui-local
This, however, ignores my local changes.
I can run my local changes with
npm run dev
but I can't figure out how to get this dev server to run anything other than the Petstore example.
Can anyone help me combine the two, so I can run swagger-ui with local code changes AND my own swagger.json?
Make sure you are volume mounting the correct local directory.
Locally, I had my swagger config in $PWD/src/app/swagger/swagger.yaml. Running the following worked fine:
docker run -p 80:8080 -e SWAGGER_JSON=/tmp/swagger.yaml -v `pwd`/src/app/swagger:/tmp swaggerapi/swagger-ui
Simply refreshing the Swagger-UI page or clicking the "Explore" button in the header triggered a refresh of the data from my YAML file.
You can also specify BASE_URL; excerpt from swagger-installation:
docker run -p 80:8080 -e BASE_URL=/swagger -e SWAGGER_JSON=/foo/swagger.json -v /bar:/foo swaggerapi/swagger-ui
I found this topic because I wanted to see a visual representation of my local swagger file, but could not seem to get swagger-ui (running in docker) to display anything other than the petstore.
Ultimately, my issue was with understanding the -e SWAGGER_JSON and -v flags, so I wanted to explain them here.
-v <path1>:<path2>
This option says "Mount the path <path1> from my local file system within the swagger-ui docker container on path <path2>"
-e SWAGGER_JSON=<filepath>
This option says "By default, show the swagger for the file at <filepath> using the docker container's file system." The important part here, is that this filepath should take into account how you set <path2> above
Putting it all together, I ended up with the following:
docker run -p 8085:8080 -e SWAGGER_JSON=/foo/swagger.json -v `pwd`:/foo swaggerapi/swagger-ui
In English, this says: "Run my swagger-ui instance on port 8085. Mount my current working directory as '/foo' in the docker container. By default, show the swagger file at '/foo/swagger.json'."
The important thing to note is that I have a file called swagger.json in my current working directory. This command mounts my current working directory as /foo in the docker container. Then, swagger UI can pick up my swagger.json as /foo/swagger.json.
Here's how I ended up solving this; it also allows you to have multiple YML files:
docker run -p 80:8080 \
-e URLS_PRIMARY_NAME=FIRST \
-e URLS="[ \
{ url: 'docs/first.yml', name: 'FIRST' } \
, { url: 'docs/second.yml', name: 'SECOND' } \
]" \
-v `pwd`:/usr/share/nginx/html/docs/ \
swaggerapi/swagger-ui
I figured it out for npm run dev:
Place my-file.json in the dev-helpers folder. Then it's available from the search bar at http://localhost:3200/.
To load it automatically when opening the server, alter dev-helpers/index.html by changing
url: "http://petstore.swagger.io/v2/swagger.json"
to
url: "my-file.json"
In case you are running a Maven project with Play Framework, the following steps solved my issue:
1.) Alter the conf/routes file. Add the line below:
GET /swagger.json controllers.Assets.at(path="/public/swagger-ui",file="swagger.json")
2.) Add the swagger.json file to your Swagger-UI folder
So when you run the Maven project on a port, e.g. 7777, start the Play server using mvn play2:run, and then localhost:7777/docs will automatically pull the JSON file that was added locally.
Docker compose solution:
Create a .env file and add the following:
URLS_PRIMARY_NAME=FIRST
URLS=[ { url: 'docs/swagger.yaml', name: 'FIRST' } ]
And create a docker-compose file with contents below:
version: "3.3"
services:
swagger-ui:
image: swaggerapi/swagger-ui
container_name: "swagger-ui"
ports:
- "80:8080"
volumes:
- /local/tmp:/usr/share/nginx/html/docs/
environment:
- URLS_PRIMARY_NAME=${URLS_PRIMARY_NAME}
- URLS=${URLS}
The swagger.yaml file is at /local/tmp on the host.
For people facing this issue on a Mac: it's a permission problem. Since Catalina, Docker by default doesn't have permission to let its images read local files on your system. Once I granted it, it worked and picked up my local swagger JSON file.
To grant the privileges, go to System Preferences > Security & Privacy > Files and Folders, and add Docker for Mac and your shared directory.
Another solution, if you want to provide multiple URLs served from a specific folder (not the default /usr/share/nginx/html/docs/):
docker run -p 80:8080 \
-e SWAGGER_JSON=/docs/api.yaml \
-e URLS="[ \
{ url: '/api1.yaml', name: 'API 1' }, \
{ url: '/api2.yaml', name: 'API 2' } \
]" \
-v `pwd`/docs:/docs \
swaggerapi/swagger-ui
Or for docker compose:
version: '3.8'
services:
  swagger-ui:
    image: swaggerapi/swagger-ui
    volumes:
      - ./docs:/docs
    environment:
      SWAGGER_JSON: /docs/api.yaml
      URLS: '[{ url: "/api1.yaml", name: "API 1" }, { url: "/api2.yaml", name: "API 2" }]'
Please note: SWAGGER_JSON requires an absolute path, while the URLs in URLS are relative to the specified volume.
Note: I've read this and this question. Neither helped.
I've created a Spring config server Docker image. The intent is to be able to run multiple containers with different profiles and search locations. This is where it differs from the questions above, where the properties were either loaded from git or from classpath locations known to the config server at startup. My config server is also deployed traditionally in Tomcat, not using a repackaged boot jar.
When I access http://<docker host>:8888/movie-service/native, I get no content (see below). The content I'm expecting is given at the end of the question.
{"name":"movie-service","profiles":["native"],"label":"master","propertySources":[]}
I've tried just about everything under the sun but just can't get it to work.
Config server main:
@SpringBootApplication
@EnableConfigServer
@EnableDiscoveryClient
public class ConfigServer extends SpringBootServletInitializer {
    /* Check out the EnvironmentRepositoryConfiguration for details */
    public static void main(String[] args) {
        SpringApplication.run(ConfigServer.class, args);
    }

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        return application.sources(ConfigServer.class);
    }
}
Config server application.yml:
spring:
  cloud:
    config:
      enabled: true
      server:
        git:
          uri: ${CONFIG_LOCATION}
        native:
          searchLocations: ${CONFIG_LOCATION}
server:
  port: ${HTTP_PORT:8080}
Config server bootstrap.yml:
spring:
  application:
    name: config-service
eureka:
  instance:
    hostname: ${CONFIG_HOST:localhost}
    preferIpAddress: false
  client:
    registerWithEureka: ${REGISTER_WITH_DISCOVERY:true}
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://${DISCOVERY_HOST:localhost}:${DISCOVERY_PORT:8761}/eureka/
Docker run command:
docker run -it -p 8888:8888 \
-e CONFIG_HOST="$(echo $DOCKER_HOST | grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}')" \
-e HTTP_PORT=8888 \
-e CONFIG_LOCATION="file:///Users/<some path>/config/" \
-e REGISTER_WITH_DISCOVERY="false" \
-e SPRING_PROFILES_ACTIVE=native xxx
Config Directory:
/Users/<some path>/config
--- movie-service.yml
Content of movie-service.yml:
themoviedb:
  url: http://api.themoviedb.org
Figured this out myself. The Docker container doesn't have access to the host file system unless the directory is mounted at runtime using the -v flag.
For the sake of completeness, this is the full working Docker run command:
docker run -it -p 8888:8888 \
-e CONFIG_HOST="$(echo $DOCKER_HOST | grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}')" \
-e HTTP_PORT=8888 \
-e CONFIG_LOCATION="file:///Users/<some path>/config/" \
-e REGISTER_WITH_DISCOVERY="false" \
-e SPRING_PROFILES_ACTIVE=native \
-v "/Users/<some path>/config/:/Users/<some path>/config/" \
xxx
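With the volume mounted, requesting http://<docker host>:8888/movie-service/native again returns the properties from movie-service.yml, roughly like this (the exact propertySource name may differ):
{"name":"movie-service","profiles":["native"],"label":null,"propertySources":[{"name":"file:/Users/<some path>/config/movie-service.yml","source":{"themoviedb.url":"http://api.themoviedb.org"}}]}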