I have an application that requires the PyTorch library to function, which I've installed on my computer. However, I want to deploy my application and use PyTorch inside a container. To achieve this, I plan to use a pre-built PyTorch Docker image and link it to my container in my Docker Compose file.
In my Docker Compose file, I have included the PyTorch Docker image as a separate service that my application container depends on. I have also included an environment variable specifying the path where PyTorch is installed inside the PyTorch container. Finally, I have mounted the source code and data directories as volumes in the application container, so that the application code can access them:
version: '3'
services:
  myapp:
    build: .
    image: myapp:latest
    environment:
      - PYTHONPATH=/usr/local/lib/python3.9/site-packages
    volumes:
      - ./src:/app/src
      - ./data:/app/data
    ports:
      - "8000:8000"
    depends_on:
      - pytorch
  pytorch:
    image: pytorch/pytorch:latest
I am getting this error:
ModuleNotFoundError: No module named 'torch'
As I understand it, depends_on will not make my app use the PyTorch Docker image.
Important: I don't want to build a Docker container or image on top of the PyTorch image. I need to make my Docker container use the PyTorch image.
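For what it's worth, a quick check like the following (service names as in the compose file above; docker-compose run is used so the app does not have to be running) illustrates that each service has its own filesystem:

docker-compose run --rm pytorch python -c "import torch; print(torch.__version__)"   # works: torch is installed in the pytorch image
docker-compose run --rm myapp python -c "import torch"                               # fails: torch only exists inside the pytorch container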
Related
I have an app developed in .NET Core and I use Docker to deploy it on a Linux VPS.
In the app, I have a feature for uploading files, and I store them in wwwroot. I have used Docker volumes to externalize the folder.
But every time I do a build I lose all the files that users uploaded, which is normal.
Update: This is how I'm declaring the volume:
app:
  image: app
  depends_on:
    - "postgres_image"
  build:
    context: .
    dockerfile: Dockerfile
  ports:
    - "5000:5000"
  volumes:
    - app_wwwroot:/wwwroot
My question is: what is the best approach to making changes to the app (building the source code and getting a new release) without losing the uploaded files?
Thanks.
It would've been better if you had shown how you are using Docker volumes to persist the wwwroot data.
If you want to persist your data, you can use either bind mounts or volumes, with the docker run command or in docker-compose.
I usually use bind mounts instead of volumes when I want to persist data:
docker run -v './path/on/your/machine:/path/inside/the/container' image_name
or in Docker Compose:
version: '3.8'
services:
  app:
    image: image_name
    volumes:
      - './path/on/your/machine:/path/inside/the/container'
As you can see, you will be mounting ./path/on/your/machine from your host machine into /path/inside/the/container, which in your case is the data of wwwroot.
Any change made to a directory or file mapped this way affects both the container and your host machine.
Building again won't affect those directories or files.
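Applied to the wwwroot case, a minimal sketch could look like the following; the in-container path /app/wwwroot is an assumption, so use whatever path your published .NET Core app actually serves wwwroot from:

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "5000:5000"
    volumes:
      - ./uploads:/app/wwwroot   # bind mount: uploads live on the host and survive image rebuilds

Rebuilding the image then only replaces the application code; the files under ./uploads on the host are left untouched.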
I have set up a docker-compose project which creates multiple images:
cache_server:
  image: current_timezone/full-supervisord-cache-server:1.00
  container_name: renamed-varnish-cache
  networks:
    - network_frontend
  build:
    context: "./all-services/"
    dockerfile: "./cache-server/Dockerfile.cacheserver.varnish"
    args:
      - DOCKER_CONTAINER_USERNAME=username
  ports:
    - "6081:6081"
    - "6082:6082"
When I run docker-compose -f file1.yml -f file2.override.yml up, I get the containers; in the case of the one above, it will be named renamed-varnish-cache.
In a corresponding Dockerfile (./nginx-proxy/Dockerfile.proxy.nginx) I want to be able to use the container_name property defined in the docker-compose.yml shown above.
When the containers are created, I want to update the Varnish configuration inline inside the Dockerfile: RUN sed -i "s|webserver_container_name|renamed-varnish-cache|g" /etc/varnish/default.vcl
For instance:
backend webserver_container_name {
  .host = "webserver_container_name";
  .port = "8080";
}
To this (I anticipate I will have to replace the - with _ for the backend name):
backend renamed_varnish_cache {
  .host = "renamed-varnish-cache";
  .port = "8080";
}
Is there a way to receive the items named in docker-compose as variables inside the Dockerfile?
In core Docker, there are two separate concepts. An image is a built version of some piece of software packaged together with its dependencies; a container is a running instance of an image. There are separate docker build and docker run commands to build images and launch containers, and you can launch multiple containers from a single image.
Docker Compose wraps these concepts. In particular, the build: block corresponds to the image-build step, and that is what invokes the Dockerfile. None of the other Compose options are available or visible inside the Dockerfile. You cannot access the container_name: or environment: variables or volumes: because those don't exist at this point in the build lifecycle; you also cannot contact other Compose services from inside the Dockerfile.
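The one piece of the Compose file that does reach the Dockerfile is the args: list inside the build: block, which is exposed through ARG. A minimal sketch (the ARG name is my own illustration, and the value still has to be written into the Compose file by hand rather than derived from container_name:):

ARG WEBSERVER_BACKEND_HOST=webserver_container_name
RUN sed -i "s|webserver_container_name|${WEBSERVER_BACKEND_HOST}|g" /etc/varnish/default.vcl

with a matching args: entry such as - WEBSERVER_BACKEND_HOST=renamed-varnish-cache under build: in the Compose file.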
It's pretty common to have multiple containers run off the same image if they have largely the same code base but need a different top-level command. One example is a Python Django application that needs Celery background workers; you'd have the same project structure but a different command for the Celery worker.
version: '3.8'
services:
  web:
    build: .
    image: my/django-app
  worker:
    image: my/django-app
    command: celery worker ...
Now with this stack you can docker-compose build to build the one image, and then run docker-compose up to launch both containers from that image. (During the build you can't know what the container names will be, and there will be two container names so you can't just use one in the Dockerfile.)
At a design level, this means that you often can't include configuration-type settings in the image itself (other containers' hostnames, user IDs for host-shared filesystems). If your application lets you specify these things as environment variables, that's the easiest option. You can use bind mounts (volumes:) to inject whole config files. If neither of these things work for you, you can use an entrypoint script to rewrite the config file.
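A sketch of that last approach, assuming the Varnish setup from the question and an environment variable named WEBSERVER_HOST (both the variable name and the VCL path are illustrative):

#!/bin/sh
# docker-entrypoint.sh: rewrite the Varnish backend host at container start,
# then hand control to the image's normal command.
set -e
: "${WEBSERVER_HOST:?WEBSERVER_HOST must be set}"
sed -i "s|webserver_container_name|${WEBSERVER_HOST}|g" /etc/varnish/default.vcl
exec "$@"

The Compose file then supplies the value at run time, when the container (and its name) actually exists, for example with environment: - WEBSERVER_HOST=renamed-varnish-cache.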
I’m using TensorFlow docker images for the first time. Before I get going with big time investments, I want to make sure I understand where files should be. Should I store, run, create, save all files inside the container and remove what I want to later? Should any files remain on the host?
Always edit the files outside the container. I recommend Docker Compose to set up your Docker environment. Here's an example:
# Use version 2.3 of Docker Compose to access the GPU with NVIDIA-Docker
# (it's the only version that supports GPUs this way)
version: '2.3'
services:
  ai_container:
    image: ai_container
    container_name: ai_container
    working_dir: /ai_container/scripts
    build:
      context: .
      dockerfile: Dockerfile
    # You may want to expose port 6006 to use Tensorboard
    ports:
      - "6006:6006"
    # Mount your scripts, logs, results and datasets (the datasets read-only)
    volumes:
      - ./scripts:/ai_container/scripts
      - ./logs:/ai_container/logs
      - ./results:/ai_container/results
      - /hdd/my_heavy_dataset_folder/:/datasets:ro
    # Depending on the task you may need extra shared memory
    shm_size: '8gb'
    # This enables the GPU (requires NVIDIA-Docker)
    runtime: nvidia
    # Start Tensorboard to keep the container alive
    command: tensorboard --host 0.0.0.0 --logdir /ai_container/logs
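With a layout like this, the day-to-day workflow stays on the host: edit the code under ./scripts, keep the container running, and execute things inside it. For example (train.py is just a placeholder name for one of your scripts):

docker-compose up -d                               # build the image and start the container in the background
docker-compose exec ai_container python train.py   # run a script from the mounted ./scripts folder
docker-compose logs -f ai_container                # follow the Tensorboard/container output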
This is my second day working with Docker. Can you help me with a solution for this typical case?
Currently, our application is a combination of a Java Netty server, Tomcat, Python Flask, and MariaDB.
Now we want to use Docker to make deployment easier.
My first idea is to create one Docker image for the environment (CentOS + Java 8 + Python 3), another image for MariaDB, and one image for the application.
So the docker-compose.yml should be like this:
version: '2'
services:
  centos7:
    build:
      context: ./
      dockerfile: centos7_env
    image: centos7_env
    container_name: centos7_env
    tty: true
  mariadb:
    image: mariadb/server:10.3
    container_name: mariadb10.3
    ports:
      - "3306:3306"
    tty: true
  app:
    build:
      context: ./
      dockerfile: app_docker
    image: app:1.0
    container_name: app1.0
    depends_on:
      - centos7
      - mariadb
    ports:
      - "8081:8080"
    volumes:
      - /home/app:/home/app
    tty: true
The app_docker Dockerfile will be like this:
FROM centos7_env
WORKDIR /home/app
COPY docker_entrypoint.sh ./docker_entrypoint.sh
RUN chmod +x ./docker_entrypoint.sh
ENTRYPOINT ["./docker_entrypoint.sh"]
In docker_entrypoint.sh there should be a couple of commands like:
#!/bin/bash
sh /home/app/server/Server.sh start
sh /home/app/web/Web.sh start
python /home/app/analyze/server.py
I have some questions:
1- Is this design good, or is there a better idea for this?
2- Should we separate the image for the database like this? Or could we install the database on the OS image and then commit it?
3- If I run docker-compose up, will Docker create two containers for the OS image and the app image (which is based on the OS image)? Is there any way to create a container only for the app (which already runs on CentOS)?
4- If the app Dockerfile is not based on the OS image but uses FROM scratch, can it run as expected?
Sorry for the long question. Thank you all in advance!
One thing to understand is that a Docker container is not a VM - containers are much more lightweight, so you can run many of them on a single machine.
What I usually do is run each service in its own container. This allows me to package only stuff related to that particular service and update each container individually when needed.
With your example I would run the following containers (see the sketch after the list):
MariaDB
Container running /home/app/server/Server.sh start
Container running /home/app/web/Web.sh start
Python container running python /home/app/analyze/server.py
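A Compose sketch along those lines (the image build contexts, the shared /home/app bind mount, and the assumption that the start scripts stay in the foreground are all mine):

version: '2'
services:
  mariadb:
    image: mariadb/server:10.3
    ports:
      - "3306:3306"
  server:
    build: ./server                                # image with Java 8 + the Netty server
    command: sh /home/app/server/Server.sh start   # assumed to stay in the foreground
    volumes:
      - /home/app:/home/app
  web:
    build: ./web                                   # image with Tomcat
    command: sh /home/app/web/Web.sh start         # assumed to stay in the foreground
    volumes:
      - /home/app:/home/app
  analyze:
    build: ./analyze                               # image with Python 3
    command: python /home/app/analyze/server.py
    volumes:
      - /home/app:/home/app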
You don't really need to run the centos7 container - it is just a base image which you use to build other images on top of. You would have to build it manually first so that you can build other images from it - I guess this is what you are trying to achieve here, but it makes the docker-compose.yml a bit confusing.
There's really no need to create a huge base image which contains everything. A better practice in my opinion is to use more specialized images. For example, in your case you could have a container which contains only Python, and for Java, your preferred JDK.
My personal preference is Alpine-based images, and you can find many official images based on it: python:<version>-alpine, node:<version>-alpine, openjdk:<version>-alpine (though I'm not quite sure about all versions), postgres:<version>-alpine, etc.
Hope this helps. Let me know if you have other questions and I will try to address them here.
I have image A (some_laravel_project) and image B (laravel_module). Image A is a Laravel project that looks like this:
app
modules
core
Volume Image b here
config
As the list above suggests, I want to share a volume from image B in image A using docker-compose. I want to access the files from container B.
This is the docker-compose I tried; I didn't receive any errors creating those images in GitLab CI. I checked, and the volume and its files are stored in the module_user:latest container.
I think I made a mistake mounting the volume to some_laravel_project.
version: '3'
services:
  laravel:
    image: some_laravel_project
    working_dir: /var/www
    volumes:
      - /var/www/storage
      - userdata:/var/www/Modules
  user:
    image: laravel_module
    volumes:
      - userdata:/user
volumes:
  userdata:
  webroot:
The method you used to share volumes across containers in Docker Compose is the correct one. You can find this documented under docker-compose volumes:
If you want to reuse a volume across multiple services, then define a
named volume in the top-level volumes key, and use that named volume
within each service.
In your case, the directory /var/www/Modules in the laravel service will have the same content as /user inside the user service. You can verify that by going into each container and checking the directory by running:
docker exec -it <container-name> bash
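If you also want to look at what ended up in the named volume itself, independent of either service, you can mount it into a throwaway container. Note that Compose prefixes the volume name with the project name, so check docker volume ls for the exact name first:

docker volume ls
docker run --rm -v <project>_userdata:/data alpine ls -la /data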