How to start the Triton server after building the tritonserver image for custom Windows Server 2019? - nvidia

Building the Windows-based Triton server image.
Building Dockerfile.win10.min for Triton server version 22.11 did not work, because the base image required for building the server image was not available for download.
To work around this, I downgraded the Triton server version to 22.10. I also needed to download the appropriate cuDNN and TensorRT versions for the build.
I successfully built the base image for the 22.10 server version using the command below.
docker build -t win10-py3-min -f Dockerfile.win10.min .
Once the base image was built, I used the command below to build the Triton server image, with the appropriate container tags and the required backends.
python build.py --cmake-dir=<path/to/repo>/build --build-dir=/tmp/citritonbuild --no-container-pull --image=base,win10-py3-min --enable-logging --enable-stats --enable-tracing --enable-gpu --endpoint=grpc --endpoint=http --repo-tag=common:<container tag> --repo-tag=core:<container tag> --repo-tag=backend:<container tag> --repo-tag=thirdparty:<container tag> --backend=ensemble --backend=tensorrt:<container tag> --backend=onnxruntime:<container tag> --backend=openvino:<container tag>
I tried different arguments with the build.py command, but building the Triton server image was unsuccessful due to issues related to CMake and RapidJSON.
Some issues on GitHub suggested trying the latest stable release, 22.12. Building its base image required the same OS as 22.11, so I built the base image with the Dockerfile.win10.min file from version 22.10, which had worked earlier.
After passing different arguments to build.py, I was able to build the tritonserver image. However, the initially built images did not contain tritonserver.exe, which is required to start the server. In the end I was able to build a tritonserver image that included the required tritonserver.exe.
I built the image using the command below.
python build.py -v --no-container-pull --image=base,win10-py3-min --enable-logging --enable-stats --enable-tracing --enable-gpu --endpoint=grpc --endpoint=http --repo-tag=common:r22.12 --repo-tag=core:r22.12 --repo-tag=backend:r22.12 --repo-tag=thirdparty:r22.12 --backend=ensemble
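Given the earlier builds that were missing the binary, it is worth confirming tritonserver.exe actually made it into the image before trying to start the server. A quick check along these lines should work (assuming the tritonserver:latest tag and the C:\opt\tritonserver layout that the run command below also uses; adjust paths if your build differs):
docker run --rm tritonserver:latest cmd /c dir C:\opt\tritonserver\bin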
To start the Triton server, the local model repository needs to be mounted into the container. The command to start the server is below.
docker run -it -v C:/Users/Desktop/model_repository:C:/opt/tritonserver/models tritonserver:latest bin/tritonserver.exe --model-repository=C:/opt/tritonserver/models
While starting the Triton server, I get the error below.
failed to resize tty, using default size

Related

Issue with docker-compose support while installing VECTR

I am trying to install VECTR on a GCP Ubuntu instance, following the official writeup.
I used apt-get to install the requirements (docker-ce, docker-ce-cli, containerd.io, docker-compose, unzip) on Ubuntu (GCP).
But when I try to run docker-compose up -d, I get an issue with the docker-compose version.
ERROR: Version in "./docker-compose.yml" is unsupported. You might be seeing this error because you're using the wrong Compose file version. Either specify a version of "2" (or "2.0") and place your service definitions under the services key, or omit the version key and place your service definitions at the root of the file to use version 1.
I changed the docker-compose.yml file and set the version to 2.
But now I am getting a different issue:
ERROR: Invalid interpolation format for "ports" option in service "tomcat": "${VECTR_PORT:-8081}:8443"
The standalone docker-compose package has been deprecated. Instead, you should install docker-compose-plugin. This will matter more over time because the versions are drifting far apart (currently 1.25.X vs 2.6.X). Instead of executing docker-compose up, you will now execute the command:
docker compose up
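For reference, a minimal sketch of the migration on Ubuntu, assuming Docker's official apt repository is already configured (it provides the docker-ce packages you installed):
sudo apt-get remove docker-compose
sudo apt-get update && sudo apt-get install docker-compose-plugin
docker compose version
docker compose up -d
Compose v2 runs as a subcommand of docker itself, so there is no separate docker-compose binary to keep in sync.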

DITA-OT with docker and bootstrap plugin ... how do i get access to the plugin directory?

I'm pretty new to Docker and also to DITA. I would like to run DITA-OT in a Docker container; I installed and set up everything (under Windows) according to these instructions:
Installing plug-ins in a Docker image
I also need the bootstrap plugin, so my simple Dockerfile looks like:
FROM docker.pkg.github.com/dita-ot/dita-ot/dita-ot:3.4
RUN dita --install https://github.com/infotexture/dita-bootstrap/archive/3.3.zip
Then I built the image and created the container:
docker image build -t dita_test:1.0 .
docker container run -it -v /c/Admin/DITA:/src dita_test:1.0 -i /src/my.ditamap -o /src/out/dita-bootstrap -f html5-bootstrap -v
The output is generated without errors and everything looks good... but here is what I don't understand:
how can I pass arguments to the bootstrap plugin? (e.g. --args.css site.css)
how can I make the bootstrap directory available outside of the container? (I want to extend the bootstrap.hdf.xml file ...)
I found older documentation where the opt/dita-ot/DITA-OT directory was mounted, but that doesn't work, or I am misunderstanding it.
Help would be great ... thanks!
Might this version 3.4 documentation for Running the dita command from a Docker image help? (3.4 being the most current version of the DITA-OT at this time...)
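For the first question, a hedged sketch based on the DITA-OT parameter documentation: HTML5 parameters such as args.css can be appended to the same docker container run invocation, since the image's entrypoint is the dita command (site.css is the file name from your example; --args.copycss=yes is assumed here to copy it into the output):
docker container run -it -v /c/Admin/DITA:/src dita_test:1.0 -i /src/my.ditamap -o /src/out/dita-bootstrap -f html5-bootstrap --args.css=site.css --args.copycss=yes
For the second question, one option is to copy the installed plugin directory out of a container rather than mounting over it. The /opt/app install path and the net.infotexture.dita-bootstrap plugin id are assumptions about the image layout; verify them inside the container first:
docker container create --name dita_tmp dita_test:1.0
docker cp dita_tmp:/opt/app/plugins/net.infotexture.dita-bootstrap ./dita-bootstrap
docker container rm dita_tmp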

How to run Bazel container images on OSX?

According to the documentation at bazelbuild/rules_docker, it should be possible to work with these container images on OSX, and it also claims that it's possible to do so without Docker.
These rules do not require / use Docker for pulling, building, or pushing images. This means:
They can be used to develop Docker containers on Windows / OSX without boot2docker or docker-machine installed.
They do not require root access on your workstation.
How do I do that? Here's a simple rule:
go_image(
    name = "helloworld_image",
    importpath = "github.com/nictuku/helloworld",
    library = ":go_default_library",
    visibility = ["//visibility:public"],
)
I can build the image with bazel build :helloworld_image. It produces a tarball in bazel-bin, but bazel run fails to run it:
INFO: Running command line: bazel-bin/helloworld_image
Loaded image ID: sha256:08d312b529d30431c68741fd3a31468a02533f27a8c2c29eedc969dae5a39852
Tagging 08d312b529d30431c68741fd3a31468a02533f27a8c2c29eedc969dae5a39852 as bazel:helloworld_image
standard_init_linux.go:185: exec user process caused "exec format error"
ERROR: Non-zero return code '1' from command: Process exited with status 1.
It's trying to run the Linux binary, but this is OSX, which is silly.
I also tried doing a "docker load" on the .tar content but it doesn't seem to like that format.
$ docker load -i bazel-bin/helloworld_image-layer.tar
open /var/lib/docker/tmp/docker-import-330829602/app/json: no such file or directory
Help? Thanks!
You are building for your host platform by default, so you need to build for the container platform if you want to do that.
Since you are using a Go binary, you can cross-compile by specifying --cpu=k8 on the command line. Ideally we would be able to just say that the Docker image needs a Linux binary (so there would be no need to specify the --cpu command-line flag), but this is still a work in progress in Bazel.
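Concretely, something along these lines should produce a Linux image from OSX (a sketch based on the rules_docker docs of this era; treat the details as assumptions):
bazel build --cpu=k8 :helloworld_image
bazel run --cpu=k8 :helloworld_image
Note also that the docker-load-compatible tarball is the implicit :helloworld_image.tar output target, not the -layer.tar file you tried to load, which is why docker load rejected it.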

Running C/C++ binary executable as a docker container

I am new to the container world and exploring options to run my application in a container. Here is what I am seeing:
When compiling and building the C/C++ binary is included as part of the Docker image build itself, it works without any problems: the container starts and everything runs fine.
If I try to run an already compiled, existing binary using CMD ["./helloworld"] in a container, it throws this error:
standard_init_linux.go:185: exec user process caused "exec format error"
Any ideas how to get out of this problem? This seems like a basic problem that would have been solved already.
Here is my Dockerfile:
FROM ubuntu
COPY . /Users/test/Documents/CPP-Projects/HelloWorld-Static
WORKDIR /Users/test/Documents/CPP-Projects/HelloWorld-Static
CMD ["./build/exe/hellostatic/hellostatic"]
Here is my exe:
gobjdump -a build/exe/hellostatic/hellostatic
build/exe/hellostatic/hellostatic: file format mach-o-x86-64
build/exe/hellostatic/hellostatic
Here is the error:
docker run test
standard_init_linux.go:185: exec user process caused "exec format error"
The problem is that you are trying to run an incompatible binary format in your container...
You are running an Ubuntu-based container (the FROM ubuntu line), but you are trying to run a Mach-O binary. By default, Linux will not run Mach-O binaries.
Build your binary for the target platform (Ubuntu/Linux) and it will work. Since you appear to be running Mac OS X, you could install an Ubuntu VM to compile the binary and transfer it to the container.
When you build it inside the container, it works because it is built for the right platform.
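For example, a minimal sketch that compiles inside the image, so the result is a Linux ELF binary regardless of the host OS (the single-file hello.cpp and the g++ toolchain are assumptions for illustration):
FROM ubuntu
RUN apt-get update && apt-get install -y g++
COPY hello.cpp /src/hello.cpp
WORKDIR /src
RUN g++ -o helloworld hello.cpp
CMD ["./helloworld"]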

Docker hub automated build fails but locally does not

I have set up an automated build on Docker Hub here (the sources are here).
The build works locally. I have also tried rebuilding it with the --no-cache option:
docker build --no-cache .
and the process completes successfully:
Successfully built 68b34a5f493a
However, the automated build fails on Docker hub with this error log:
...
Cloning into 'nerdtree'...
Vim: Warning: Output is not to a terminal
Vim: Warning: Input is not from a terminal
Error detected while processing command line:
E492: Not an editor command: PluginInstall
E492: Not an editor command: GoInstallBinaries
mv: cannot stat `/go/bin/*': No such file or directory
This build apparently fails on the following vim command:
vim +PluginInstall +GoInstallBinaries +qall
Note that the warnings Output is not to a terminal and Input is not from a terminal also appear in the local build.
I cannot understand how this can happen. I am using a standard Ubuntu 14.04 system.
I finally figured it out. The issue was related to this one.
I am using Docker 1.0 on my host machine, but a later version is in production on Docker Hub. Without an explicit ENV HOME=... line in the Dockerfile, version 1.0 uses / as the home directory, while the later version uses /root. As a result, vim could not find its .vimrc file, since it was copied to / instead of /root. The solution was to explicitly set ENV HOME=/root in my Dockerfile, so there is no difference between the two versions.
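The relevant Dockerfile lines end up looking roughly like this (the .vimrc copy step is assumed from the description above; the plugin command is the one from the question):
ENV HOME=/root
COPY .vimrc $HOME/.vimrc
RUN vim +PluginInstall +GoInstallBinaries +qall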
