Move an LXC Container from one host to another through an API - lxc

I'm new to LXC containers. I know how to move a container from one host to another using the CLI (lxc move), but how would I do the same thing using the pylxd API or any other API? I need to be able to do this for an upcoming project. Thanks.
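For illustration, here is a rough sketch of how this can look with pylxd. It assumes both hosts expose the LXD API over HTTPS and already trust the client certificate; the endpoint URLs, certificate paths and container name are placeholders, not part of the original question.

```python
# Hedged sketch: moving a container between two LXD hosts with pylxd.
# Endpoints, certificate paths and the container name are placeholders.
from pylxd import Client

source = Client(endpoint="https://source-host:8443",
                cert=("lxd.crt", "lxd.key"), verify=False)
target = Client(endpoint="https://target-host:8443",
                cert=("lxd.crt", "lxd.key"), verify=False)

container = source.containers.get("my-container")

# Stopped containers migrate most reliably; live migration needs CRIU
# support on both hosts.
if container.status == "Running":
    container.stop(wait=True)

# pylxd container objects provide a migrate() helper that drives the same
# operation as "lxc move" against the target client.
container.migrate(target, wait=True)
```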

Related

Is there a way to restrict container access for a user in Docker?

I'm using a Docker machine to host some processes, such as game servers, with friends. I want to give some users access to the CLI of some containers (and only that), so that they cannot interfere with each other. I don't want them to be root on this server.
One solution might be to create users and allow them to run certain commands appropriate to their containers (such as sudo docker attach), and let them handle the rest themselves. I would like to know if there is another, better and more proper way to do this.
Have you done any experimentation of this kind?
Thanks.
Why not create a separate Docker container for each service?
In other words, you create one container for each friend, and in each container you create a user who is a member of the sudo group (so your friend will be root of his own container). Then install openssh-server and give each of your friends the SSH access details (username and password) for his container. This way they can do whatever they want without affecting each other.
Of course, do not forget to forward the SSH connection (and any other connections for the services you want) from the host to the container.
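A minimal sketch of that setup using the Docker SDK for Python is below; the image name, usernames and host ports are placeholders, and the image is assumed to already have openssh-server and a sudo-enabled user baked in.

```python
# Hedged sketch: one SSH-accessible container per friend, each mapped to its
# own host port. Image name and port numbers are illustrative placeholders.
import docker

client = docker.from_env()

friends = {"alice": 2201, "bob": 2202}

for name, ssh_port in friends.items():
    client.containers.run(
        "game-server-sshd",          # assumed image with openssh-server installed
        name=f"{name}-box",
        hostname=f"{name}-box",
        detach=True,
        ports={"22/tcp": ssh_port},  # forward host port -> container's SSH port
    )
```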
Sadly, as explained in
Why doesn't Docker support multi-tenancy?
Docker doesn't support multi-tenancy, so you cannot isolate users from each other.
Edit: one possible solution is to run an SSH server in every container and let your users connect directly to each container via SSH instead of going through the host machine.

Deploying web apps (Java microservices) using Docker vs deploying web apps on multiple ports in the same VM?

With respect to Java microservices deployment: since we use the same kind of configuration for all the apps (microservices), does it make any difference to use Docker rather than deploying on multiple ports? Because in the end the apps will be down if the VM is down.
Docker is brilliantly suited to microservice application deployment. You put each service into a separate container and use docker-compose (docker swarm, k8s or whatever) to launch your containers and link them into one isolated network (done automatically).
In such a configuration you don't address services by port, but by hostname. Each container has its own name inside the network, and all requests are made using that name. That is much more convenient than juggling different TCP ports.
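As a small illustration of the hostname-based addressing, one service can reach a sibling container by its compose service name; the service name and port below are placeholders for whatever your compose file defines.

```python
# Hedged sketch: inside a compose/swarm network, containers resolve each
# other by service name via Docker's embedded DNS.
import requests

resp = requests.get("http://orders-service:8080/health", timeout=5)
print(resp.status_code, resp.text)
```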
Putting applications inside containers is becoming the de facto standard for deployment. Docker helps in creating container images which can then be deployed into a cluster, such as a Kubernetes cluster.

Self updating docker stack

I have a docker stack deployed with 20+ services which comprise my application. I would like to know whether there is a way to update this stack with the latest changes to the software from within one of the containers running as part of the stack.
The approach I have tried (sketched below):
In one of the containers for a service, mounted the Docker socket and the /usr/bin/docker binary, and downloaded the latest compose file from the server.
Ran a script which pulls the latest images.
Ran docker stack deploy with the new compose file.
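A minimal sketch of those steps, assuming the Docker socket and docker CLI are mounted into the updater container; the compose file URL and stack name are placeholders.

```python
# Hedged sketch of the in-container update flow described above. It assumes
# /var/run/docker.sock and the docker CLI are mounted into this container.
import subprocess
import urllib.request

COMPOSE_URL = "https://config.example.com/latest/docker-compose.yml"
COMPOSE_PATH = "/tmp/docker-compose.yml"
STACK_NAME = "mystack"

# Fetch the latest compose file from the server.
urllib.request.urlretrieve(COMPOSE_URL, COMPOSE_PATH)

# Redeploy the stack; --with-registry-auth lets the swarm nodes pull the
# newest images referenced in the compose file.
subprocess.run(
    ["docker", "stack", "deploy", "--with-registry-auth",
     "-c", COMPOSE_PATH, STACK_NAME],
    check=True,
)
```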
Everything works fine this way, but if the service running this update process itself has an update, and the docker stack deploy tries to recreate this service before any other service in the stack, then the stack update fails.
Any suggestions or alternative approaches for this?
There is no out-of-the-box solution for Docker swarm mode (something like Watchtower for standalone Docker). I think you have already found the best way to do this automatically. I would suggest you put the updater container (the one that is updating the services) on an ignore list. Then, on one of your manager nodes, create a cron job that updates that one container. I know this is not a perfect solution, but it should work.
The standard way to do this is to build a new Docker image that contains your new application code. Tag it (as in the docker build -t argument) with some unique version, like a source control tag or date stamp. Start a new container with the new application code, then stop and delete the old container.
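A hedged sketch of that build-tag-redeploy step, driven from Python: the registry, image and stack names are placeholders, and the compose file is assumed to reference the image through an environment variable.

```python
# Hedged sketch: build a new image tagged with a date stamp, push it, and
# redeploy the stack so it picks up the new tag. The compose file is assumed
# to reference the image as ${APP_IMAGE}.
import os
import subprocess
from datetime import datetime, timezone

tag = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
image = f"registry.example.com/myapp:{tag}"

subprocess.run(["docker", "build", "-t", image, "."], check=True)
subprocess.run(["docker", "push", image], check=True)

env = {**os.environ, "APP_IMAGE": image}
subprocess.run(
    ["docker", "stack", "deploy", "-c", "docker-compose.yml", "mystack"],
    check=True, env=env,
)
```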
As a general rule you do not upgrade the software inside a running container. Delete the old container and start a new container with the software and version you want. Also, this is generally managed by an operator, a continuous deployment system, or an orchestration system, not by the container itself. (Mounting the Docker socket into a container is a significant security exposure.)
(Imagine setting up a second copy of your cluster that works exactly the same way as your production cluster, except that it has the software you want to deploy tomorrow. You don't want your production cluster picking that up on its own until you've tested it. This scheme should give you a reproducible deployment setup so that it's easy to start that pre-production cluster, but also give you control over which specific versions are running where.)

Accessing CLI apps from one docker container in another container

Maybe a stupid question, but I was told to wrap a large distribution of CLI apps in one container and then build another container that can call them (via the Java ProcessBuilder API, Python execv API, etc.). Is that possible?

Run a command on a container from inside another one

I'm trying to develop an application that has two main containers: a Java/Tomcat web server and a Python and Lua one for machine-learning scripts.
So here is the issue: I need to run a command on the Python/Lua container's CLI whenever the Java one receives a certain request. I know that if the web server weren't a container I could simply use docker exec, but wouldn't having the Java part of my application outside a container break the whole security idea of Docker?
Thanks a lot, and sorry for my poor English!
(+1 for #larsks) Set up a REST API that allows one container to trigger actions on the other container.
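As a rough illustration of that REST approach, the Python/Lua container could expose a tiny HTTP endpoint that the Java container calls over the shared Docker network; Flask, the script path and the port here are assumptions, not part of the original setup.

```python
# Hedged sketch: a small Flask service inside the Python/Lua container that
# runs a fixed script when the Java container POSTs to it. Script path and
# port are placeholders.
import subprocess
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/run-script", methods=["POST"])
def run_script():
    # Only run a fixed, whitelisted command; never build a shell command
    # from request data.
    result = subprocess.run(
        ["python", "/opt/scripts/predict.py"],
        capture_output=True, text=True, timeout=300,
    )
    return jsonify({"returncode": result.returncode, "stdout": result.stdout})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

The Java side then only needs an ordinary HTTP client call to something like http://<ml-container-name>:5000/run-script on the shared network.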
You can set up container communication across links. Docs here: https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/
After that you can call container B from container A using B:port/<your API>
