I'm totally stumped on how to approach this.
I have two containers running on Amazon Lightsail's container service, but I have no idea how to access them with SSH.
Is this even possible? I need a command line to check some things in the running container.
The container I want to access has port 22 open.
B"H
I have a Docker container on EC2 attempting to connect to DocumentDB. DocumentDB needs to be within the VPC network.
When attempting to connect to DocumentDB in a non-host network mode the connection fails, but when I (as a hack) run the container in host network mode it does work. For simple deployments and for replicating my containers, though, that's a problem.
Any idea how to connect to DocumentDB (without SSH tunneling) from within Docker hosted on EC2?
If I understand correctly, you are running the container in the none network mode. None means you want to disable all networking for your container. The most frequently used modes are bridge and host.
You can also refer to the post below, which explains how to run a Docker container in ECS and connect to DocumentDB:
https://aws.amazon.com/blogs/database/deploy-a-containerized-application-with-amazon-ecs-and-connect-to-amazon-documentdb-securely/
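For what it's worth, a container on the default bridge network can normally still reach VPC resources, because its outbound traffic goes through the EC2 host's network stack; the usual culprits are the cluster's security group or DocumentDB's TLS requirement. Here is a minimal connectivity check, a sketch only: it assumes pymongo is installed and the EC2 host is in the same VPC as the cluster, and the endpoint, credentials, and CA bundle path are placeholders.

```python
# Minimal connectivity check from a container on the default bridge
# network. Endpoint, credentials, and CA bundle path are placeholders.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://myuser:mypassword@"
    "mycluster.cluster-xxxxxxxx.us-east-1.docdb.amazonaws.com:27017/"
    "?tls=true&tlsCAFile=/app/rds-combined-ca-bundle.pem"
    "&replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false"
)
print(client.admin.command("ping"))  # raises if the cluster is unreachable
```

If this fails in bridge mode but works in host mode, check that the cluster's security group allows traffic from the EC2 instance and that the CA bundle is actually present inside the container.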
Is there any way to do the following:
run one service (container) with the main application, a server (a Flask application);
the server allows running further services, which are also Flask applications;
but I want to run each new service in a separate container?
For example, I have an endpoint /services/{id}/run on the server, where each id identifies a service. The Docker image is the same for all services; each service runs on a separate port.
I would like something like this:
a request to the server, <host>/services/<id>/run -> the application on the server runs some magic command / sends a message somewhere -> the service with that id starts in a new container.
I know that, at least locally, I can use Docker-in-Docker or simply mount the Docker socket into a container and work with Docker from inside that container. But I would like to find a way that works across multiple machines (each service may run on a different machine).
For Kubernetes: I know how to create and run pods and deployments, but I can't find how to start a new container on command from another container. Can I somehow communicate with k8s from inside a container to run a new container?
Generally:
can I start a new container from another container without Docker-in-Docker or mounting the Docker socket;
can I do it with or without Kubernetes?
Thanks in advance.
I've compiled all of the links that were in the comments under the question. I would advise taking a look at them; a short sketch follows each list below.
Docker:
StackOverflow: control Docker from another container.
The link explaining the security considerations is no longer working, but I managed to retrieve it from the Web Archive: Don't expose the Docker socket (not even to a container)
Exposing dockerd API
Docker Engine Security
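For the Docker route the question describes, here is a minimal sketch, with the caveats from the security links above: it assumes the host's Docker socket is mounted into the server container (-v /var/run/docker.sock:/var/run/docker.sock) and that the docker Python SDK is installed; the image name and port scheme are made up for illustration.

```python
# Flask server that starts one container per service id by talking to
# the host's Docker daemon through the mounted socket.
import docker
from flask import Flask

app = Flask(__name__)
docker_client = docker.from_env()  # uses the mounted /var/run/docker.sock

@app.route("/services/<int:service_id>/run", methods=["POST"])
def run_service(service_id):
    container = docker_client.containers.run(
        "my-flask-service:latest",              # hypothetical shared image
        detach=True,
        name=f"service-{service_id}",
        ports={"5000/tcp": 5000 + service_id},  # one host port per service
    )
    return {"container_id": container.short_id}, 201

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Note this only reaches the daemon on the local machine, which is exactly why it does not span multiple hosts; that is where Kubernetes comes in.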
Kubernetes:
Access Clusters Using the Kubernetes API
Kubeflow for machine learning deployments
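For the Kubernetes route, a minimal sketch using the official Python client, assuming the code runs in a pod whose service account has RBAC permission to create pods; the image and names are placeholders.

```python
# Create a pod from inside another pod via the Kubernetes API.
# Requires a service account allowed to create pods in the namespace.
from kubernetes import client, config

config.load_incluster_config()  # picks up the pod's service-account token
api = client.CoreV1Api()

service_id = 42  # placeholder
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name=f"service-{service_id}"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="service",
                image="my-flask-service:latest",  # hypothetical image
                ports=[client.V1ContainerPort(container_port=5000)],
            )
        ],
        restart_policy="Never",
    ),
)
api.create_namespaced_pod(namespace="default", body=pod)
```

The scheduler then places the new pod on any suitable node, which covers the multi-machine requirement from the question.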
I have several Docker containers running on my local machine: one with SQL Server, one with RabbitMQ, and one with my code. Everything works fine between the containers, but how can I reach them from outside?
I would like to connect to SQL Server with the Management Studio installed on my desktop. I would also like to reach the RabbitMQ management console from the browser on my desktop.
Inside the Docker network I reference the other containers by hostname, but those hostnames are not visible outside the network. I can connect with a container's IP, but that changes each time I start it.
Is there a way to give each container a hostname that is visible from the host? Is there a better approach?
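The usual approach is to publish the relevant container ports to the host when starting the containers, so desktop tools can simply use localhost; a sketch with the Python Docker SDK, using the images' common default ports (adjust tags and passwords to your setup):

```python
# Publish container ports to the host so desktop tools can use localhost.
import docker

client = docker.from_env()

# SQL Server: Management Studio can then connect to localhost,1433.
client.containers.run(
    "mcr.microsoft.com/mssql/server:2019-latest",
    detach=True,
    name="sql",
    environment={"ACCEPT_EULA": "Y", "SA_PASSWORD": "Your_password123"},
    ports={"1433/tcp": 1433},
)

# RabbitMQ with the management plugin: browse to http://localhost:15672.
client.containers.run(
    "rabbitmq:3-management",
    detach=True,
    name="rabbit",
    ports={"5672/tcp": 5672, "15672/tcp": 15672},
)
```

Unlike the container IP, a published port stays the same across restarts.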
I have created a Docker container running on a VM in Azure (or consider any cloud). That container runs a Java/Node.js/C# application which needs to access a Jenkins server running inside a company network.
So will I be able to access Jenkins from that Docker container? If not, please suggest how to access it.
You can use the --network=host option to let your container run in the same network context as its host, so Jenkins will be reachable from the container whenever it's reachable from the container host.
Of course, you should prefer a specific network or explicit routes if possible.
https://docs.docker.com/engine/reference/run/#network-settings
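A minimal sketch of that option with the Python Docker SDK, equivalent to docker run --network=host; the image name and Jenkins URL are placeholders:

```python
# Run a container on the host's network stack, equivalent to
# `docker run --network=host my-app`. The container then sees exactly
# the networks and routes (e.g. a VPN into the company network) that
# the VM itself sees.
import docker

client = docker.from_env()
client.containers.run(
    "my-app:latest",       # hypothetical application image
    detach=True,
    network_mode="host",   # no port mapping needed (or possible) here
    environment={"JENKINS_URL": "http://jenkins.internal.example.com:8080"},
)
```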
I'm just getting started with Docker, and I read a ton of documentation and tutorials yesterday, but I can't find where I read about replacing an external service using a linked container, and I'm not even sure which terminology to search for.
Say there is an apache container and a mysql container, where apache was run with a link to mysql, and has access to its ports and such. Now instead of MySQL running on the container instance, we move it to AWS RDS, for example. How do you modify the mysql container so that apache continues to run as expected? To clarify, apache would still be run with a link to a container with the alias mysql, but the mysql container would take care of getting traffic on that port sent to AWS.
Alternatively, maybe there is a container running a MySQL service, but that container is on another host. I have a vague feeling that the pattern I'm referring to would be able to handle that scenario as well. Does this sound familiar to anyone?
If the container is on another host, why not just hit that host directly and let Docker transparently forward requests on 3306 (or whatever port you're running MySQL on) to the container? I can't think of any reason to link containers unless they're actually on the same host. Docker is great at being transparent, so clients on another machine can use a service in Docker as if it were running directly on the host without Docker.
If you really have to have both containers on the same machine (even though the mysql container is really calling out to RDS or another host), you should be able to make a new, simple image aliased as mysql that does nothing but accept connections on the MySQL port and forward them to RDS.
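A sketch of what such a forwarding container's entrypoint could look like, assuming a plain TCP relay is enough; the RDS endpoint is a placeholder, and a tool like socat would do the same job in one line:

```python
# Tiny TCP relay: listens on the MySQL port and pipes every connection
# to the real backend (RDS or a MySQL container on another host).
import asyncio

BACKEND_HOST = "mydb.xxxxxxxx.us-east-1.rds.amazonaws.com"  # placeholder
BACKEND_PORT = 3306

async def pipe(reader, writer):
    # Copy bytes one way until the sending side hangs up.
    try:
        while data := await reader.read(4096):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle(client_reader, client_writer):
    backend_reader, backend_writer = await asyncio.open_connection(
        BACKEND_HOST, BACKEND_PORT)
    # Relay both directions concurrently.
    await asyncio.gather(
        pipe(client_reader, backend_writer),
        pipe(backend_reader, client_writer),
    )

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", 3306)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```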