May be a stupid question, but I was told to wrap a large distribution of CLI apps in one container and then build another container that can call them (via Java process builder API, Python execv API, etc.) Is that possible?
I have been working on a project where I have had several docker containers:
Three OSRM routing servers
Nominatim server
Container where the webpage code is with all the needed dependencies
So, now I want to prepare a version that a user could download and run. What is the best practice to do such a thing?
Firstly, I thought about joining everything into one container, but I have read that running several processes in one container is not recommended. Secondly, I thought about wrapping everything up in a VM, but that is not really a "program" that a user can launch. My third idea was to write a script that would download each container from Docker Hub separately and launch the webpage. But I am not sure whether that is best practice, or whether there are better ideas.
When you need to deploy a full project composed of several containers, you can use a specialized tool.
A well-known one for single-server usage is docker-compose:
Compose is a tool for defining and running multi-container Docker applications
https://docs.docker.com/compose/
You could provide your users with:
a docker-compose file (see the sketch below)
your application's Docker images (e.g., through Docker Hub)
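For example, a minimal docker-compose.yml sketch for a setup like the one described (three OSRM servers, Nominatim, and the webpage container); the image names, ports, and data paths here are placeholders and would need to be adapted:

    version: "3.8"
    services:
      osrm-car:                         # one of the three OSRM routing servers
        image: osrm/osrm-backend        # placeholder image
        command: osrm-routed --algorithm mld /data/car.osrm
        volumes:
          - ./osrm-data:/data
      nominatim:
        image: mediagis/nominatim:4.2   # placeholder image/tag
        ports:
          - "8080:8080"
      web:
        image: yourrepo/webpage:latest  # the webpage container with its dependencies
        ports:
          - "80:80"
        depends_on:
          - osrm-car
          - nominatim

Users then only need to run docker-compose up -d next to that file after pulling the images.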
For clusters/cloud, you would look more at orchestrators like Docker Swarm, Kubernetes, or Nomad.
Kubernetes's documentation is here:
https://kubernetes.io/
I'm trying to create a general docker development environment, but having some issues with it. My goal is to have only one container that I can use for any related project that I have.
For example, instead of installing Python on my computer I'd like to use a container with Python installed. The idea is that all of my Python projects would communicate with this container to access the Python environment.
In the future I want to have a docker-compose with containers for python, node.js, flutter, and so on... A single docker compose with all of the development environment I need.
Thing is, I'm not able to create a Python environment that's accessible to all my projects. I've tried to use Remote Containers in VS Code, so I don't have to install Python locally, and it works, but only partially. I have to create a Dockerfile for the project, and when it runs it creates a container for it.
I can't access that container from other projects, and if I try the same process in another project it creates yet another container. Each project ends up needing its own Dockerfile and its own container running Python.
Is there a way to create one single container where all projects can access and use python? Is this idea even viable?
I have a Docker stack deployed with 20+ services which make up my application. I would like to know whether there is a way to update this stack with the latest version of the software from within one of the containers running as part of the stack.
The approach I have tried:
In one of the service containers, mounted the Docker socket and the /usr/bin/docker binary, and downloaded the latest compose file from the server.
Ran a script which pulls the latest images.
Initiated a docker stack deploy with the new compose file.
Everything works fine this way (a rough sketch of the script is below), but if the service running this update process itself has an update, and the docker stack deploy tries to recreate this service before any other service in the stack, then the stack update fails.
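Roughly, the update flow above looks like this (the compose-file URL, image list, and stack name are placeholders; /var/run/docker.sock and /usr/bin/docker are assumed to be mounted into the container):

    #!/bin/sh
    # 1. Fetch the latest stack/compose file from the release server (placeholder URL)
    curl -fsSL https://example.com/releases/latest/docker-compose.yml -o /tmp/stack.yml

    # 2. Pull the latest images used by the stack (illustrative image names)
    for img in yourrepo/service-a:latest yourrepo/service-b:latest; do
      docker pull "$img"
    done

    # 3. Redeploy the stack with the new definitions
    docker stack deploy --with-registry-auth -c /tmp/stack.yml mystack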
Any suggestion or alternative approaches for this?
There is no out-of-the-box solution for Docker swarm mode (something like Watchtower for a single Docker host). I think you have already found the best way to do this automatically. I would suggest you put the update container (the one that is updating the services) on an ignore list. Then, on one of your manager nodes, create a cron job that updates that one service. I know this is not a perfect solution, but it should work.
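For example, a cron entry on a manager node could force-update just the updater service itself (the service and image names are placeholders):

    # /etc/cron.d/update-updater -- runs on a swarm manager node
    # Every night at 03:00, pull the latest image for the updater service and redeploy it
    0 3 * * * root docker service update --image yourrepo/stack-updater:latest --force mystack_updater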
The standard way to do this is to build a new Docker image that contains your new application code. Tag it (as in the docker build -t argument) with some unique version, like a source control tag or date stamp. Start a new container with the new application code, then stop and delete the old container.
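For a single container, that cycle might look like this (image, tag, and container names are placeholders):

    # Build a new image from the updated source and tag it with a unique version
    docker build -t yourrepo/myapp:1.0.1 .

    # Replace the running container with one based on the new tag
    docker stop myapp && docker rm myapp
    docker run -d --name myapp -p 8080:8080 yourrepo/myapp:1.0.1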
As a general rule you do not upgrade the software inside a running container. Delete the old container and start a new container with the software and version you want. Also, this is generally managed by an operator, a continuous deployment system, or an orchestration system, not by the container itself. (Mounting the Docker socket into a container is a significant security exposure.)
(Imagine setting up a second copy of your cluster that works exactly the same way as your production cluster, except that it has the software you want to deploy tomorrow. You don't want your production cluster picking that up on its own until you've tested it. This scheme should give you a reproducible deployment setup so that it's easy to start that pre-production cluster, but also give you control over which specific versions are running where.)
From this link I found that Google Cloud Dataflow uses Docker containers for its workers: Image for Google Cloud Dataflow instances
I see it's possible to find out the image name of the docker container.
But is there a way I can get this Docker container (i.e., which repository do I get it from?), modify it, and then tell my Dataflow job to use this new container?
The reason I ask is that we need to install various C++, Fortran, and other library code on our Docker images so that the Dataflow jobs can call them, but these installations are very time-consuming, so we don't want to use the "resource" property option in Dataflow.
Update for May 2020
Custom containers are only supported within the Beam portability framework.
Pipelines launched within the portability framework currently must pass --experiments=beam_fn_api, either explicitly (as a user-provided flag) or implicitly (for example, all Python streaming pipelines do).
See the documentation here: https://cloud.google.com/dataflow/docs/guides/using-custom-containers?hl=en#docker
There will be more Dataflow-specific documentation once custom containers are fully supported by Dataflow runner. For support of custom containers in other Beam runners, see: http://beam.apache.org/documentation/runtime/environments.
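As a rough sketch of what launching a Python pipeline with a custom worker image looked like at that time (the project, bucket, and image names are placeholders, and the flag names follow the Beam docs of that era, so they may have changed since):

    python my_pipeline.py \
      --runner=DataflowRunner \
      --project=my-gcp-project \
      --region=us-central1 \
      --temp_location=gs://my-bucket/tmp \
      --experiments=beam_fn_api \
      --worker_harness_container_image=gcr.io/my-gcp-project/my-custom-beam-worker:latest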
The Docker containers used for the Dataflow workers are currently private and can't be modified or customized.
In fact, they are served from a private Docker repository, so I don't think you're able to pull them onto your machine.
Update Jan 2021: Custom containers are now supported in Dataflow.
https://cloud.google.com/dataflow/docs/guides/using-custom-containers?hl=en#docker
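For the use case in the original question (pre-installing C++/Fortran libraries), a minimal custom-container sketch could look like the following, assuming the Apache Beam Python SDK base image published on Docker Hub; the base tag and package list are illustrative:

    # Base image published by the Apache Beam project (tag is illustrative)
    FROM apache/beam_python3.8_sdk:2.28.0

    # Install the native toolchains/libraries the pipeline shells out to
    RUN apt-get update && \
        apt-get install -y --no-install-recommends g++ gfortran libblas-dev liblapack-dev && \
        rm -rf /var/lib/apt/lists/*

    # Copy any prebuilt binaries or shared libraries the workers need
    COPY ./native-libs/ /opt/native-libs/
    ENV LD_LIBRARY_PATH=/opt/native-libs:${LD_LIBRARY_PATH}

The resulting image is then pushed to a registry and passed to the job via the custom-container flag described in the linked guide.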
You can generate a template from your job (see https://cloud.google.com/dataflow/docs/templates/creating-templates for details), then inspect the template file to find the workerHarnessContainerImage used.
I just created one for a job using the Python SDK and the image used in there is dataflow.gcr.io/v1beta3/python:2.0.0
Alternatively, you can run a job, then ssh into one of the instances and use docker ps to see all running docker containers. Use docker inspect [container_id] to see more details about volumes bound to the container etc.
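For example (the instance name and zone are placeholders, and the gcloud CLI is assumed):

    # SSH into one of the Dataflow worker VMs
    gcloud compute ssh <worker-instance-name> --zone=us-central1-f

    # On the worker: list the running containers, then inspect one of them
    docker ps
    docker inspect <container_id>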
I'm trying to develop an application that has two main containers, a Java-Tomcat webserver and a Python and Lua one for machine learning scripts.
So here is the issue: I need to send a command to the Python/Lua container's CLI whenever the Java one receives a certain request. I know that if the webserver weren't a container I could simply use docker exec, but wouldn't having the Java part of my application outside a container break the whole isolation idea of Docker?
Thanks a lot, and sorry for my poor English!
(+1 for #larsks) Set up a REST API that allows one container to trigger actions on the other container.
You can set up container-to-container communication across links. Docs here: https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/
After that, you can call from container A to container B using B:port/<your API>.
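As a sketch, using a user-defined bridge network (the modern equivalent of links) rather than the legacy --link flag; the image names and the /run-script endpoint are hypothetical, and curl is assumed to be available in the webserver image:

    # Put both containers on the same user-defined network
    docker network create appnet
    docker run -d --name ml-service --network appnet yourrepo/python-lua-ml   # serves an HTTP API on port 5000
    docker run -d --name webserver  --network appnet -p 8080:8080 yourrepo/java-tomcat

    # From inside the webserver container, the ML container is reachable by name
    docker exec webserver curl -s http://ml-service:5000/run-script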