Guacamole starting new docker container (running a script) when user connects

I'm developing a training lab with Guacamole as the interface where users connect to their instances. Right now, it's configured to connect to a Docker container when a user logs in, but the container must already be running for that to work.
Is there a way to spin up a new Docker container when a user logs in? I'm thinking that executing a script would work, as long as it blocks until the container is ready.
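Guacamole doesn't expose a built-in hook for running a script at login, but if you can interpose one (for example via a custom authentication extension, or whatever provisioning layer creates the connection), a blocking start script might look something like the sketch below. The container name, image, and port are all placeholders, not anything Guacamole defines:

#!/bin/sh
# start-lab.sh <username> -- start the user's lab container if it isn't
# already running, then block until its VNC port answers. The names here
# (lab-*, mylab/desktop, port 5901) are hypothetical; adjust to your lab.
NAME="lab-$1"
IMAGE="mylab/desktop"
PORT=5901

# Start the container, creating it on first login.
if ! docker ps --format '{{.Names}}' | grep -qx "$NAME"; then
  docker start "$NAME" 2>/dev/null || docker run -d --name "$NAME" "$IMAGE"
fi

# Block until the service inside the container accepts connections,
# so Guacamole doesn't try to connect before the instance is up.
IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$NAME")
until nc -z "$IP" "$PORT"; do sleep 1; done

Once the script returns, Guacamole can connect to the container's address as it does today.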

Related

Docker Container Connection Issue After Exiting and Restarting

I'm using Docker Desktop for Mac to run a Docker container running FileMaker Server 19 on Ubuntu Server. When I start up Docker Engine from scratch, i.e., no daemons running, then start the container, all works as expected. I can open FileMaker's admin console in a browser and I can open the hosted database with FileMaker Pro client app.
But if I stop the container, quit Docker Desktop, and try to run the container again, it starts up but I can't establish connections to it with either the FileMaker Pro client or a browser. The solution I've found is to quit the Docker processes that continue to run in the background and make the Docker engine restart from scratch. This obviously isn't desirable, and it indicates to me that something isn't configured correctly in the network connection to the container.
I'm new to Docker, so apologies in advance if I'm missing something very basic. I searched for a solution online but can't find one.
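Not an answer, but it may help to narrow down whether the port publishing survives the restart cycle. A minimal check, assuming the container is named fms and publishes FileMaker's data port 5003 (both assumptions; substitute your container name and port mapping):

docker ps --filter name=fms   # is the container actually up after the restart?
docker port fms               # are the host-side port bindings still present?
nc -vz localhost 5003         # does anything answer on the published port?

If the bindings are listed but nothing answers, the problem is likely inside Docker's port-forwarding layer rather than in the container itself.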

Enabling Kubernetes on Docker Desktop breaks access to external service

I'm using Docker Desktop for Mac.
I have built a docker image for a Node.js app that connects to an external MongoDB database via URI (the db is running on an AWS instance that I'm connected to over vpn). This works fine - I run the container and the app can connect to the database. Happy days.
Then...
I enable Kubernetes on docker desktop. I apply a deployment.yml to run the container but this deployment fails when trying to connect to the db. From my app's logs (I'm using mongoose):
MongooseServerSelectionError: connect EHOSTUNREACH [MY DB IP] +30005ms
Interestingly...
I can now no longer connect to the db by running my docker container either. I get the same error.
I have to disable Kubernetes, restart Docker Desktop (twice), prune my previous container and network, and re-run my container. Then it will work again.
As soon as I enable kubernetes again, the db becomes unreachable again.
Any ideas why this is and/or how to fix it?
So the issue for us turned out to be an IP range clash. Exactly the same as described in this SO question:
Change Kubernetes docker-for-desktop cluster network ip
Unfortunately, like the user in that question, we haven't been able to find a solution.
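For anyone landing here: a quick way to confirm the clash is to compare the cluster's service range against the route your DB traffic takes over the VPN. A rough check, assuming the default docker-desktop context:

# The first service IP reveals where the cluster's service CIDR starts
# (docker-desktop typically lands in 10.96.0.0/12).
kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}'
# Compare against the VPN route to the DB on the host (macOS):
netstat -rn | grep <first octets of your DB IP>

If the two ranges overlap, traffic to the DB IP is being captured by the cluster's routes instead of going out over the VPN, which matches the EHOSTUNREACH symptom.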

Trying to get Xdebug session initiated in a docker inside a VM to reach my remote computer

I have a Docker container running my PHP app.
This container needs to run inside a VM in a remote datacenter.
I work from a computer that can connect to the mentioned VM.
My intention is to have the Xdebug session that is initiated inside the container reach my computer (more precisely, my PHPStorm).
Both the container and the VM are running CentOS (company approved/installed images).
The development computer runs macOS.
I am able to use ssh remote forward (aka: tunnel) to forward any requests from the VM to my computer.
I want to either:
- be able to open a tunnel from my computer directly to the docker container in the VM
- or be able to extend the current tunnel from the VM into the container.
I have found no way to do the first and have run into a lot of issues trying the second.
Any suggestions?
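One sketch for the second option, assuming Xdebug 3 (default port 9003) and the default Docker bridge gateway 172.17.0.1 — both worth verifying on your setup. The idea is to point Xdebug at the VM's bridge address and let the existing remote forward relay it to the laptop:

# On the development machine: forward the VM's bridge address to PHPStorm.
# Note: sshd's GatewayPorts setting must allow non-loopback binds, or the
# forward stays on 127.0.0.1 and the container can't reach it.
ssh -R 172.17.0.1:9003:localhost:9003 user@vm

# Inside the container, in xdebug.ini (Xdebug 3 directive names):
xdebug.mode=debug
xdebug.client_host=172.17.0.1
xdebug.client_port=9003

From the container's point of view, the VM's bridge gateway is simply a reachable host, so Xdebug connects to it and the SSH forward carries the session the rest of the way.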

ServiceFabric pass docker interactive mode argument on startup

I need to know how to pass the docker interactive mode argument when starting a container hosted in a ServiceFabric cluster. This is how we do it on the docker command line:
docker run -it imagename
How do we tell ServiceFabric to start the container in interactive mode?
You can't. By default, a container will be launched by a system account (likely NetworkService), without a user profile, on a 'random' node inside a cluster of machines that has no logged-on users.
What are you trying to accomplish? Maybe there's another way to solve interaction requirements, by running a Web Server like IIS or NodeJS inside the container. Then you can interact with containerized processes.
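If the goal is just an occasional shell for debugging, one workaround — assuming you can reach the node currently hosting the container — is to exec into the running instance instead of starting it interactively:

docker ps                                # on the hosting node: find the container ID
docker exec -it <container-id> cmd.exe   # or /bin/bash for a Linux container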

Docker Container unable to Connect to ip on Host Machines Network

I have a system of three small Spring Boot apps, each serving a different purpose, which contain REST endpoints. The three apps are all meant to work off the same database (MariaDB). Right now, the system runs as four separate Docker containers: three for the three apps, and a fourth for MariaDB (based on the MariaDB Docker image). All three app containers connect to the database container using the --link network pattern.
Each of the app containers was launched from the same image, using:
docker run -i -t -p 8080:8080 --link mariadb:mariadb javaimage /bin/bash
This Docker setup currently works as expected. All three apps can call MariaDB, and each app is accessible from the host machine via REST calls to http://localhost:8080/pathToEndpoint.

The project has recently expanded and a new requirement has been added. We are using Netflix Eureka as a service lookup point, which should in the future allow these containers to be deployed anywhere with minimal changes to the software calling them. Netflix Eureka requires the app to effectively "check in" when it is launched. This is all handled by Spring Boot itself, so the check-in is part of the startup process. The Eureka server is on the same network as the host machine and, for the time being, is accessed via an IP address.

If the Spring Boot applications running this Eureka check-in component are launched directly on the host machine, everything works as expected: the app makes a successful call to the Eureka server and notifies it of the app's existence. If I run the same app within a Docker container on the same host machine, this fails with a connection refused. Upon investigation I found I could not even ping the IP address of the Eureka server from within the Docker container, which explains the failure. Testing further, I found I can ping external sites such as Google without a problem, but any servers internal to my network are unreachable when I ping them from within the Docker container.
Therefore my question is: what network configuration am I missing that would cause this? I recognize that Docker has quite a lot of network configuration options, but I have not been able to find someone with a similar issue.
Any help is appreciated. Thank you!
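One pattern that produces exactly this symptom (external hosts reachable, internal ones not) is an overlap between Docker's default bridge subnet, 172.17.0.0/16, and your internal address space: replies to internal IPs get routed into the bridge instead of out of the host's interface. A way to check, and a workaround on a custom bridge — the subnet below is illustrative, pick one that doesn't collide with your LAN:

# What subnet is the default bridge using?
docker network inspect bridge -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}'

# If it overlaps your internal network, move the containers to a bridge
# that doesn't. Containers on a user-defined network resolve each other
# by name, so this also replaces the legacy --link flag:
docker network create --subnet=192.168.100.0/24 appnet
docker run -i -t -p 8080:8080 --network appnet javaimage /bin/bash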
