I have two containers:
container 1: a web application
container 2: MySQL
I am using a normal Dockerfile for the web application configuration. When I try to open a database connection from the web application, the connection does not work. What could be the issue? I also can't ping the MySQL container's IP address; that IP is not reachable. Why?
However, the same database connection works on my local server.
Any ideas would be very helpful.
To summarise: I want to use two containers, one for the web application and one for MySQL, and the web application container should make its database connection to the MySQL container. Please share your ideas and suggestions.
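For context, the usual way to make this work is to put both containers on the same user-defined Docker network and let the web application reach MySQL by its container name. A minimal sketch, where every name (network, containers, image, credentials) is a placeholder rather than anything from the question:

# create a user-defined bridge network so the containers can reach each other by name
docker network create app-net

# start MySQL on that network (names and credentials here are examples only)
docker run -d --name mysql-db --network app-net \
  -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=appdb mysql:8

# start the web application on the same network; in its DB configuration
# use host "mysql-db" and port 3306, not "localhost" or the container IP
docker run -d --name web-app --network app-net -p 8080:8080 my-web-image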
Related
I am trying to add an existing MySQL database as a source database to a docker container running Apache Superset. The MySQL database that I am trying to add is not running in a docker container. It's an existing MySQL database running on a Windows machine.
I've added mysqlclient==1.4.6 to requirements.txt. The error message seems to indicate that the driver is installed.
I've used mysql://user:password@127.0.0.1:3306/database_name and mysql://user:password@localhost:3306/database_name.
The error I get is:
"ERROR: Connection failed, please check your connection settings."
I am using the image apache/incubator-superset, v. 0.36.0.
Are there any settings or config that needs to be changed to be able to communicate to an external database from within a running docker container?
So I figured it out. On Windows, run ipconfig (ifconfig on Linux/macOS) in a terminal/PowerShell and check which IP address the Docker ethernet adapter is using (listed as WSL), let's say the IP is 172.x.x.x. Then configure the connection string with that IP address on the Docker ethernet adapter as follows: mysql://user:password@172.x.x.x:3306/database_name.
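A condensed sketch of those steps (the adapter name matches the answer above; the address and credentials are examples):

# on the Windows host, find the Docker/WSL adapter address
ipconfig
#   Ethernet adapter vEthernet (WSL):
#      IPv4 Address . . . . . . . . . . : 172.20.0.1

# then use that address in the Superset database URI
mysql://user:password@172.20.0.1:3306/database_name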
Follow-up question if anybody knows: how can I connect my Docker container running apache/superset to another server/IP address on my local network that is running a MySQL server? In other words, I want to connect the apache/superset app running in a Docker container on my computer to another computer on my local network that is running a MySQL server. The MySQL server is not in a Docker container.
Maybe the steps in this blog can help.
If your MySQL is in another Docker container, it is not reachable at 127.0.0.1. In addition, if you don't want the requirements to be updated every time you git pull a new version, it is better to use requirements-local.txt.
You should be able to do that, but your MySQL server has to have an external IP that you can access from your Superset machine. First telnet to that machine on port 3306 to see if you can reach it; if you can, Superset should work with a URI very similar to the one you have.
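A quick sketch of that check (the IP address and credentials are placeholders for the machine running MySQL):

# from the machine (or container) running Superset, test that port 3306 is reachable
telnet 192.168.1.50 3306

# if the port answers, a Superset URI of this shape should work
mysql://user:password@192.168.1.50:3306/database_name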
I have several docker containers running on my local machine. One with SQL Server, one with RabbitMQ, and one with my code. Everything works fine within the docker containers but how can I reference these containers from outside?
I would like to connect to SQL Server with the management studio installed on my desktop. I would also like to hit the RabbitMQ management console from the browser on my desktop.
Inside the container I reference the other containers with a hostname but this is not visible outside of the network. I can connect with the IP but that changes each time I start it.
Is there a way to give each container a hostname that is visible from the host? Is there a better approach?
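For reference, the usual way to reach containers from the desktop is to publish their ports to fixed host ports; a sketch, with image names, passwords, and ports as examples only:

# publish each container's ports to fixed host ports
docker run -d --name sqlserver -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Example!234" \
  -p 1433:1433 mcr.microsoft.com/mssql/server:2019-latest
docker run -d --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3-management

# Management Studio can then connect to localhost,1433
# and the RabbitMQ console is at http://localhost:15672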
I am new to Docker and I may not be looking into the right place in the documentation because I couldn't find a way to do what I call "inverse EXPOSE".
So for example, I have one web application that EXPOSE 80. That same application is using a postgresql database. When I am locally developing it works fine because I connect to localhost:5432 but when I containerize the app, it says something like "connection refused". I think the Docker philosophy is to containerize as much as possible and make those containers communicate between each other through a docker network. But I am curious if it actually is possible to say that localhost:5432 in my container actually refers to the port 5432 on the actual machine that hosts my container.
Localhost inside a container is not your docker host, it's a namespaced network inside the container. So if you try to communicate with localhost or 127.0.0.1 inside a container, it's only going to communicate with other apps running inside that container.
Instead, you should use the routable IP of the host, so that requests can go out of the container and back in through the Docker host's interface to reach applications running outside of a container.
When the app is running in the container you should use the host's IP and port, e.g. 192.168.99.100:5432, and not localhost.
When using localhost in the container it refers to localhost (127.0.0.1) of the container and not the one of the host.
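A small sketch of that, assuming the Docker host is reachable at 192.168.99.100 as in the answer above (database name, user, and client are examples):

# from inside the app container, reach the host's Postgres via the host's IP, not localhost
psql -h 192.168.99.100 -p 5432 -U appuser appdb

# or point the application's connection string at the host's IP
postgresql://appuser:secret@192.168.99.100:5432/appdb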
My app runs against MS SQL Server 2012 or above.
I tried to set up 2 containers - one for my app and one to be a DB server.
But I couldn't use the DB container because the SQL Server version in the Windows image is not supported by my app.
So I want to connect to a remote DB server that I already have, which is a different machine than the Docker host.
How do I get the container to ping the remote DB server?
From the container:
C:\Installation>ping my0134.company.net
Ping request could not find host my0134.company.net. Please check the name and try again.
** NOTE - I am using Docker on Windows
Maybe you could try adding <IP of my0134.company.net> my0134.company.net to the /etc/hosts file. This way the URL can be resolved to an IP address. You can also just use
docker run --add-host my0134.company.net:<IP of my0134.company.net> <image>
to spin up your container.
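With either approach you can check from inside the running container that the name now resolves (the container name is a placeholder):

docker exec -it <container> ping my0134.company.net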
If IPv4 forwarding is enabled then the container can connect to the DB server; there is no issue with that.
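A way to check and enable that on a Linux Docker host (or inside the Linux VM that Docker Desktop uses); a sketch, requires root:

# check whether IPv4 forwarding is enabled (1 = enabled)
sysctl net.ipv4.ip_forward

# enable it for the running system
sudo sysctl -w net.ipv4.ip_forward=1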
I have a few Django apps that I want to host on a single Docker host running CentOS. I want to have 3 layers:
network
application
database
network: I want to have an nginx container in the network layer routing requests to the different containers in the application layer. I plan to use a 1:1 port mapping on this container to expose port 80 on the host. Nginx will then direct requests to the appropriate app in the application layer, running on ports 8001-8010.
application: I'll have several containers, each running a separate Django app with Gunicorn on ports 8001-8010.
database: one container running MySQL, with a different database for each app. The MySQL container will have a data volume attached for persistence.
I understand you can link containers, but as I understand it linking relies on the order in which the containers are started, i.e. how can you link nginx to several containers when they haven't been started yet?
So my questions are:
How do I connect the network layer to the application layer when the number/names of containers in the application layer are always changing, i.e. I might bring a new application online/offline? How would I update the nginx config and what would the addressing look like?
How do I connect the application layer to the database layer? Do I have to use Docker linking? In my Django application code I need the hostname of the database to connect to. What would I put as the hostname of the database container? Would it resolve?
Is there a reference architecture I could leverage?
Docker does not support dynamic linking but there are some tools that can do this for you, see this SO question.
2.) You could start your database container first and then link all application containers to the database container. Docker creates the hosts file entry at boot time (statically; if your database container reboots and gets another IP you need dynamic links, see above). When you link a container like this:
--link db:db
you can access the container with the hostname db.
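A sketch of that pattern, where all container and image names, the data path, and the password are placeholders:

# start the database container first, with a data volume for persistence
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret \
  -v /srv/mysql-data:/var/lib/mysql mysql:5.7

# link each application container to it; inside the apps the database
# is then reachable under the hostname "db" on port 3306
docker run -d --name app1 --link db:db -p 8001:8001 my-django-app1
docker run -d --name app2 --link db:db -p 8002:8002 my-django-app2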
I ended up using this solution:
https://github.com/blalor/docker-hosts
It allows you to refer to other containers on the same host by hostname. It is also dynamic, as the /etc/hosts file in the containers gets updated as containers go up and down.