How to connect a Dev Container to another Container? - docker

For this question I'm working with Prisma's dev container: https://github.com/prisma/prisma/tree/main/.devcontainer
Once I open that repo inside a container using the Remote - Containers plugin in Visual Studio Code and run some Jest tests that rely on the Docker services defined in the https://github.com/prisma/prisma/tree/main/docker folder, I get a "can't connect to database" error for all databases...
It's as if the dev container had no idea those services exist... On my PC, looking at Docker Desktop, I can see the services up and running, but the dev container can't... why?
I find it weird that I should have to change any settings, since these files come from the Prisma repo; they are supposed to be ready for action once downloaded... right?

Assuming the Docker network driver is bridge (the default). Suppose the script in your dev container builds the connection string with a line like this:
const connectionString = (
  process.env.TEST_MYSQL_URI_MIGRATE || 'mysql://root:root@localhost:3306/tests-migrate'
).replace('tests-migrate', 'tests-migrate-dev')
The localhost in that connection string means localhost inside your dev container, not your host machine. You should access the host machine's localhost instead.
The fix is to set the TEST_MYSQL_URI_MIGRATE environment variable, for example:
TEST_MYSQL_URI_MIGRATE=mysql://root:root@host.docker.internal:3306/tests-migrate
For details on how to access the host machine's localhost from inside a container, please read this question.
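As a minimal sketch, assuming the tests are launched from a shell inside the dev container and that host.docker.internal resolves there (the credentials, port and exact test command below are examples, not taken from the repo), you could export the override before running the suite:

# values are examples; adjust credentials, port and database name to your setup
export TEST_MYSQL_URI_MIGRATE='mysql://root:root@host.docker.internal:3306/tests-migrate'
# then run the Jest tests the same way as before; the exact command depends on the package
npx jest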

Related

Deploy Docker services to a remote machine via ssh (using Docker Compose with DOCKER_HOST var)

I'm trying to deploy some Docker services from a Compose file to a Vagrant box. The Vagrant box does not have a static IP. I'm using the DOCKER_HOST environment variable to set the target engine.
This is the command I use: DOCKER_HOST="ssh://$BOX_USER@$BOX_IP" docker-compose up -d. The BOX_IP and BOX_USER vars contain the correct IP address and username (obtained at runtime from the Vagrant box).
I can connect and deploy services this way, but the SSH connection always asks whether I want to trust the machine. Since the VM gets a dynamic IP, my known_hosts file gets polluted with lines I only used once, which might cause trouble in the future if the IP is reused.
Assigning a static IP results in error messages stating that the machine does not match my known_hosts entry.
And setting StrictHostKeyChecking=no is not an option either, because it opens the door to a lot of security issues.
So my question is: how can I deploy containers to a remote Vagrant box without the mentioned issues? Ideally I could start a Docker container that handles the deployments, but I'm open to any other idea as well.
The reason I don't just use a bash script while provisioning the VM is that this VM acts as a testing ground for a physical machine. The scripts I use are the same as for the real machine, and I test them regularly and automatically inside a Vagrant box.
UPDATE: I'm using Linux
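For reference, a minimal sketch of the workflow described above; obtaining the user and address from vagrant ssh-config at runtime is an assumption about the setup:

# read the box's current address and user from Vagrant (assumed approach)
BOX_IP=$(vagrant ssh-config | awk '$1 == "HostName" {print $2}')
BOX_USER=$(vagrant ssh-config | awk '$1 == "User" {print $2}')
# deploy the Compose services to the box over SSH
DOCKER_HOST="ssh://$BOX_USER@$BOX_IP" docker-compose up -d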

VSCode combine remote ssh and remote containers

On my office desktop machine, I'm running a Docker container that accesses the GPU. Since I'm working from home, I'm connected to my office desktop through SSH in VS Code via the Remote - SSH plugin, which works really well. However, I would like to further connect via Remote - Containers to that running container in order to debug the code I'm running in it. I haven't managed to get this done yet.
Does anyone know whether this is possible at all and, if so, how to get it done?
1. Install and activate an SSH server in the container.
2. Expose the SSH port via Docker.
3. Create a user with a home directory and a password in the container (a rough sketch of steps 1-3 is shown after this answer).
4. (Install the Remote - SSH extension for VS Code and) set up the SSH connection within the remote extension in VS Code, adding a config entry:
   Host <host>-docker
     HostName your.host.name
     User userIdContainer
     Port exposedSshPortInContainer
5. Connect in VS Code.
Note: Answer provided by OP on question section.
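A hedged sketch of steps 1-3, assuming a Debian/Ubuntu-based image; the image name, container name, user name, password and host port 2222 are made-up examples:

# run the GPU container with the container's SSH port 22 published on host port 2222
docker run -d --gpus all -p 2222:22 --name gpu-dev your-image sleep infinity
# install and start an SSH server inside it
docker exec gpu-dev bash -c 'apt-get update && apt-get install -y openssh-server && service ssh start'
# create a user with a home directory and a password
docker exec gpu-dev bash -c 'useradd -m devuser && echo "devuser:changeme" | chpasswd'

In the SSH config entry above, Port would then be 2222 and User would be devuser.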

I want to connect a Docker Superset container to an existing external MySQL database

I am trying to add an existing MySQL database as a source database to a docker container running Apache Superset. The MySQL database that I am trying to add is not running in a docker container. It's an existing MySQL database running on a Windows machine.
I've added mysqlclient==1.4.6 to requirements.txt. The error message seems to indicate that the driver is installed.
I've used mysql://user:password@127.0.0.1:3306/database_name and mysql://user:password@localhost:3306/database_name.
The error I get is:
"ERROR: Connection failed, please check your connection settings."
I am using the image apache/incubator-superset, v0.36.0.
Are there any settings or config that needs to be changed to be able to communicate to an external database from within a running docker container?
So I figured it out. On Windows, run ipconfig (ifconfig on Linux/Mac) in a terminal/PowerShell and check which IP address the Docker Ethernet adapter is using (listed as WSL); let's say the IP is 172.x.x.x. Then configure the connection string with that IP address of the Docker Ethernet adapter: mysql://user:password@172.x.x.x:3306/database_name.
Follow-up question, if anybody knows: how can I connect my Docker container running apache/superset to another server/IP address on my local network that is running a MySQL server? In other words, I want to connect the apache/superset app that is running on my computer in a Docker container to another computer on my local network that is running a MySQL server. The MySQL server is not in a Docker container.
Maybe the steps in this blog can help.
If your MySQL is in another Docker container, it is not 127.0.0.1. In addition, if you don't want the requirements to be overwritten every time you git pull a new image, it is better to use requirements-local.txt.
You should be able to do that, but your MySQL server has to have an external IP address that you can reach from your Superset machine. First run a telnet to port 3306 on that machine to see whether you can connect; if you can, Superset should work with a URI very similar to the one you already have.
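As a quick sketch of that connectivity check (the address 192.168.1.50 is a made-up example for the machine running MySQL):

# from the machine (or container) running Superset
telnet 192.168.1.50 3306
# or, if telnet is not installed:
nc -zv 192.168.1.50 3306

If the port is reachable, a URI of the form mysql://user:password@192.168.1.50:3306/database_name should work.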

Nifi install using Docker - Can't access the webserver

I'm new to both Docker and NiFi. I found this command that installs NiFi via Docker and used it on a virtual machine that I have in GCP, but I would like to access this container via the web server. In docker ps this appears:
What command do I need to execute to gain access to the tool via port 8080?
The container has already exposed port 8080 on the host, as evidenced by the output 0.0.0.0:8080->8080/tcp. You read that as {HOST_INTERFACE}:{HOST_PORT}->{CONTAINER_PORT}/{PROTOCOL}.
Navigate to http://SERVER_ADDRESS:8080/ (or maybe http://SERVER_ADDRESS:8080/nifi) using your web browser. You may need to modify the firewall rules applied to your VM to ensure that you can access that port from your local machine.
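If the VM's firewall is managed with gcloud, a hedged sketch of opening that port and checking it from your local machine (the rule name is made up, and SERVER_ADDRESS stands for the VM's external IP):

# allow inbound TCP traffic to port 8080 on the VM's network
gcloud compute firewall-rules create allow-nifi-8080 --allow=tcp:8080 --direction=INGRESS
# then check reachability from your local machine
curl -I http://SERVER_ADDRESS:8080/nifi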

Resource cannot be found error when accessing a page in Docker Container

I created an ASP.NET WebForms project in Visual Studio with Docker support (Windows). When I run the project using Visual Studio, the page comes up as below.
Visual Studio creates a Docker image, which I saw using the command
docker images
See the image below (webapplication3).
Now I run another instance of the image (webapplication3) with the command
docker run webapplication3:dev
I can see the container running with
docker ps
See the image below.
But now when I access this new running container using the IP http://172.17.183.118/PageA.aspx, the page doesn't come up; see the image below. (I took the IP 172.17.183.118 from the docker inspect command, so it is correct.)
Can someone tell me why I am not able to view the page? Why does it say "Resource cannot be found"?
When you run a Docker container with the defaults, the container gets an internal IP address and an exposed port that is mapped to a port on the local machine, and traffic from that IP reaches the Internet through the Docker bridge, which is associated with the local machine's network interface.
When you access the container from the local machine itself, you just need to access localhost with the port shown to you; in your case, that is http://localhost:62774/PageA.aspx. If you want to access the container from outside, use your local machine's IP address with that port, i.e. http://your-local-machine-public-ip:62774/PageA.aspx.
You can get more details from Docker Network. Also, I suggest running the container with an explicit port mapping, e.g. docker run -d -p hostPort:containerPort --name containerName yourImage.
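A hedged example of that explicit mapping for the image mentioned above (host port 8080, container port 80 and the container name are assumed values; a Windows ASP.NET image may listen on a different container port):

# publish container port 80 on host port 8080 under a fixed name
docker run -d -p 8080:80 --name webapp3 webapplication3:dev
# then browse to http://localhost:8080/PageA.aspx on the host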

Resources