I need to know how to pass the Docker interactive-mode argument when starting a container hosted in a Service Fabric cluster. This is how we do it on the Docker command line:
docker run -it imagename
How do we tell Service Fabric to start the container in interactive mode?
You can't. By default, a container is launched by a system account (likely NetworkService), without a user profile, on a 'random' server inside a cluster of machines that has no logged-on users.
What are you trying to accomplish? Maybe there's another way to meet the interaction requirement, such as running a web server like IIS or Node.js inside the container. Then you can interact with the containerized processes.
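For illustration, a minimal sketch of that pattern (the image name and ports are placeholders, not from the question): run the container detached with a web server listening, then talk to it over the published port instead of a TTY:

docker run -d -p 8080:80 imagename    # detached (-d) instead of interactive (-it)
curl http://localhost:8080/           # interact with the containerized process over HTTP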
Related
Motivation
Running DDEV for a diverse team of developers (front-end / back-end) on various operating systems (Windows, macOS and Linux) can become time-consuming, even frustrating at times.
Hoping to simplify the initial setup, I started working on an automated VS Code Remote Container setup.
I want to run DDEV in a VS Code Remote Container.
To complicate things, the container should reside on a remote host.
This is the current state of the setup: caillou/vs-code-ddev-remote-container#9ea3066
Steps Taken
I took the following steps:
Set up VS Code to talk to a remote Docker installation over SSH. You just need to add the following to VS Code's settings.json: "docker.host": "ssh://username@host".
Install Docker and create a user with UID 1000 on said host.
Add docker-cli, docker-compose, and ddev to the Dockerfile, cf. Dockerfile#L18-L20.
Mount the Docker socket in the container and use the remote user with UID 1000. In the example, this user is called node; see devcontainer.json.
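For reference, a minimal sketch of the relevant devcontainer.json entries (these property names come from the Dev Containers spec; the actual file in the repo may differ):

{
  "remoteUser": "node",
  "mounts": [
    "source=/var/run/docker.sock,target=/var/run/docker.sock,type=bind"
  ]
}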
What Works
Once I launch the VS Code Remote Container extension, an image is built using the Dockerfile, and a container is run using the parameters defined in devcontainer.json.
I can open a terminal window and run sudo docker ps. This lists the container I am in, and its siblings.
My Problem
DDEV needs to create Docker containers.
DDEV cannot be run as root.
On the host, the user with UID 1000 has the privilege to run Docker.
Within the container, the user with UID 1000 does not have the privilege to run Docker.
The Question
Is there a way to give an unprivileged user access to Docker within Docker?
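One commonly used approach, sketched here under the assumption that the socket is bind-mounted as above (this is not from the linked repo): give the container user a group whose GID matches the socket's owning group, so Docker access no longer requires root:

SOCK_GID=$(stat -c '%g' /var/run/docker.sock)  # GID owning the mounted socket
sudo groupadd -g "$SOCK_GID" docker-host       # 'docker-host' is an arbitrary name
sudo usermod -aG docker-host node              # 'node' is the UID-1000 user from the post
# log out and back in (or use newgrp) for the new membership to take effect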
Is there any way to do this:
run one service (container) with the main application, a server (a Flask application);
the server can launch other services, which are also Flask applications;
but I want to run each new service in a separate container?
For example, I have an endpoint /services/{id}/run on the server, where each id is some service id. The Docker image is the same for all services; each service runs on a separate port.
I would like something like this:
request to the server - <host>/services/<id>/run -> the application on the server runs some magic command / sends a message somewhere -> the service with that id starts in a new container.
I know that, at least locally, I can use docker-in-docker, or simply mount the Docker socket in a container and work with Docker from inside it. But I would like to find a way to work across multiple machines (each service may run on another machine).
For Kubernetes: I know how to create and run pods and deployments, but I can't find out how to start a new container on command from another container. Can I somehow communicate with Kubernetes from inside a container to run a new container?
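For context, a pod can reach the API server with the service-account credentials mounted into every container; a sketch (the service account needs RBAC permission to create pods, and pod.json is a hypothetical pod manifest):

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -X POST https://kubernetes.default.svc/api/v1/namespaces/default/pods \
  -d @pod.json   # POSTing a pod manifest starts the new container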
Generally:
can I run a new container from another container without docker-in-docker and without mounting the Docker socket;
can I do it with or without Kubernetes?
Thanks in advance.
I've compiled all of the links that were in the comments under the question. I would advise taking a look at them:
Docker:
StackOverflow: control Docker from another container.
The link explaining the security considerations is no longer working, but I've managed to retrieve it from the Web Archive: Don't expose the Docker socket (not even to a container)
Exposing dockerd API (see the sketch after these links)
Docker Engine Security
Kubernetes:
Access Clusters Using the Kubernetes API
Kubeflow, in the context of machine learning deployments
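As a sketch of what the 'Exposing dockerd API' option can look like (the hostnames and certificate files are placeholders; the TLS material has to be generated as described in the Docker Engine Security docs):

# on the remote machine: make the daemon listen on TCP with TLS client verification
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376 \
  --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem
# from any other machine: point the client at the remote daemon
docker --tlsverify -H tcp://remote-host:2376 run -d myimage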
I have Jenkins running inside a Docker container. Outside of the container, on the host, I have a bash script that I would like to run from a Jenkins pipeline inside the container, getting the result of the script back.
You can't do that. One of the major benefits of containers (and also of virtualization systems) is that processes running in containers can't make arbitrary changes or run arbitrary commands on the host.
If managing the host in some form is a major goal of your task, then you need to run it directly on the host, not in an isolation system designed to prevent you from doing this.
(There are ways to cause side effects like this to happen: if you have an ssh daemon on the host, your containerized process could launch a remote command via ssh; or you could package whatever command in a service triggered by a network request; but these are basically the same approaches you'd use to make your host system manageable by "something else", and triggering it from a local Docker container isn't different from triggering it from a different host.)
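For example, the ssh variant might look like this from inside the container, e.g. in a Jenkins pipeline sh step (the hostname, user, and script path are placeholders; the host must run sshd and trust a key available to the container):

ssh jenkins@host.example.com 'bash /path/to/script.sh' > result.txt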
I have a server running RHEL. I am creating a Docker container inside that server, also from a RHEL image.
My goal is to log in to the Docker container on a separate IP address, as if it were a VM.
So if the IP of the server is 192.168.1.10 and the IP of the container inside the server is 192.168.1.15, I want to be able to log in to both 192.168.1.10 and 192.168.1.15 as if they were separate VMs. How can I achieve that?
Thanks for your help in advance.
Short answer: you’ll need to start the container running sshd. It could be as simple as adding /usr/sbin/sshd to the run command.
Longer answer: this is not really the way Docker is supposed to be used. What you probably really want is a fully functional system, with sshd started via systemd. That is a multi-process "fat" container, and generally not considered a best practice.
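A sketch of the simple form (assuming the image has openssh-server installed and host keys generated; -D keeps sshd in the foreground so the container stays alive):

docker run -d --name rhel-ssh -p 2222:22 rhel-image /usr/sbin/sshd -D
ssh -p 2222 user@192.168.1.10   # reaches the container through the host's IP
# giving the container its own LAN address (192.168.1.15) would need extra network setup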
Options are
Use docker exec command
Use docker attach command
Start/set up sshd inside the container itself [not recommended though].
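The first two options look like this (the container name is a placeholder):

docker exec -it mycontainer /bin/bash   # start a new shell inside the running container
docker attach mycontainer               # attach to the container's main process instead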
Below link details this process nicely:
https://phoenixnap.com/kb/how-to-ssh-into-docker-container
Note: this isn't my link; I found it while browsing the internet.
I have created a Docker container which is running on a particular VM in Azure (or any cloud). That container runs a Java/Node.js/C# application which needs to access a Jenkins server running inside a company network.
So, will I be able to access Jenkins from that Docker container? If not, please suggest how it could be accessed.
You can use the --network=host option to let your container run in the same network context as the server you're trying to connect to, provided that server is accessible from the container host.
Of course you should specify a specific network or routes if possible.
https://docs.docker.com/engine/reference/run/#network-settings
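A minimal sketch (Linux hosts only; the image name is a placeholder):

docker run --network=host myimage
# the app inside now resolves and reaches the Jenkins server exactly as the VM itself would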