The second WordPress is not working in Docker

I am learning Docker.
I run two WordPress instances in Docker on one host, but the second one is not working properly.
I run one MySQL server in Docker, and the two WordPress instances share the same MySQL server.
The docker run commands are below:
sudo docker run --name mysql_db -e MYSQL_ROOT_PASSWORD=xxxx -d mysql
sudo docker run --name wordpress1 -e WORDPRESS_DB_NAME=wordpress1 --link mysql_db:mysql -p 8008:80 -d wordpress
sudo docker run --name wordpress2 -e WORDPRESS_DB_NAME=wordpress2 --link mysql_db:mysql -p 8009:80 -d wordpress
When I open ip:8008 in IE, it works, but when I open ip:8009, it redirects to ip:8008; I can't get the web page from port 8009.
So I looked at the second WordPress container's log, and it shows an HTTP 302 response.
When I change 8009 to 9009 and run MySQL and the two WordPress containers again, the second WordPress server works, and I can get the web page from ip:9009.
My MySQL and WordPress images are pulled from the official repository.
So I can't understand why the second WordPress instance works when I change the port from 8009 to 9009; I can't find the answer through searching.
docker --version
Docker version 17.06.0-ce, build 02c1d87
uname -a
Linux linux-1 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt11-1+deb8u3 (2015-08-04) x86_64 GNU/Linux
Thanks.

I think I found the reason.
I tried mapping WordPress to several other ports; only 8009 did not work.
Because my Docker host is a public cloud host, requests must pass through a firewall to reach my web server, so I think the firewall redirects port 8009 to 8008.
It is not an issue with Docker or WordPress.
Thanks all.
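One way to verify a diagnosis like this is to request the published ports from the Docker host itself, bypassing the cloud firewall entirely; if both containers answer locally while external requests to 8009 still redirect, the redirect is being introduced upstream of Docker. A minimal check (ports as in the question):
# run on the Docker host itself, bypassing the external firewall
curl -I http://localhost:8008
curl -I http://localhost:8009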

Related

How to connect to localhost from a docker container?

I have a Python application that exposes a REST API. The Python server is running on http://127.0.0.1:5000.
I have another application, written in NodeJS, that wraps the Python API and exposes another API as a passthrough. The Node server is running on http://localhost:8080.
I'm new to Docker, and I'm building a Docker image for the Node application (on macOS, Apple Silicon). The problem is that when I invoke the API using curl http://localhost:8080, Docker Desktop says "Connection refused: /127.0.0.1:5000".
The following is the docker run command I'm using:
docker run --platform linux/amd64 -d -v ./Config.toml -p 8080:8080 myApp/app:v0.1.0
I tried the --network host flag with the docker run command, but it doesn't work, since it ignores all the declared ports. I also tried adding http.server --bind 0.0.0.0 to the docker run command, but the result says "pull access denied for http.server".
How can I solve this?
Run curl http://127.0.0.1:5000 on the host machine to see if the server is actually running.
Then check server accessibility from within the container: run the container interactively and try to access the server from inside it.
docker run --platform linux/amd64 -it myApp/app:v0.1.0 /bin/bash
curl http://127.0.0.1:5000
If this does not work, it's a network configuration issue: inside the container, 127.0.0.1 refers to the container's own loopback interface, not the host, so the container needs an explicit route to the host machine (see the note at the end of this answer).
If it works, then bind the NodeJS app to 0.0.0.0 to allow connections from any IP:
docker run --platform linux/amd64 -d -v ./Config.toml -p 8080:8080 myApp/app:v0.1.0 -host 0.0.0.0
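As a side note for Docker Desktop on macOS: a container cannot reach the host via 127.0.0.1, but Docker Desktop provides the special DNS name host.docker.internal for exactly this purpose. A quick check from inside the container (image name as in the question) might look like:
docker run --platform linux/amd64 -it myApp/app:v0.1.0 /bin/bash
# inside the container, host.docker.internal resolves to the macOS host
curl http://host.docker.internal:5000
If that works, pointing the Node passthrough at http://host.docker.internal:5000 instead of http://127.0.0.1:5000 should resolve the "Connection refused" error.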

How to integrate Elassandra into my JHipster app via Docker image?

I want to integrate Elassandra into my JHipster app, in which I'm using Cassandra as the database.
I'm following the official Elassandra installation process with the Docker image, but it is confusing which container_name has to go into which command.
Here is the official link: http://doc.elassandra.io/en/latest/installation.html#docker-image
Also, my port 9200 is not enabled.
docker run --name some-elassandra -d strapdata/elassandra
docker run --name some-app --link some-elassandra:elassandra -d app-that-uses-elassandra
Elasticsearch ports 9200 and 9300 are exposed for communication between containers.
So when an Elassandra container has been started like this:
docker run --name some-elassandra -d strapdata/elassandra
Contacting the REST API could be done with something like:
docker run -it --link some-elassandra --rm strapdata/elassandra curl some-elassandra:9200
If it is still not working, make sure you pulled a recent version of the image, and feel free to open an issue on the GitHub repository strapdata/docker-elassandra.
This Elassandra image is based on the official Cassandra image; you may refer to its documentation for advanced setup.
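Regarding "my port 9200 is not enabled": EXPOSEd ports are only reachable from other containers. To reach the Elasticsearch REST API from the host machine itself, the port has to be published explicitly; a minimal sketch:
docker run --name some-elassandra -p 9200:9200 -d strapdata/elassandra
# now the REST API answers on the host
curl http://localhost:9200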

Docker container cannot connect to linked containers' services

I'm using Docker version 1.9.1 build a34a1d5 on an Ubuntu 14.04 server host and I have 4 containers: redis (based on alpine linux 3.2), mongodb (based on alpine linux 3.2), postgres (based on ubuntu 14.04) and the one that will run the application that connects to these other containers (based on alpine linux 3.2). All of the db containers expose their corresponding ports in the Dockerfile.
I modified the database containers so their services bind to all addresses rather than only to the localhost IP. This way I should be able to connect to all of them from the app container.
For the sake of testing, I first ran the database containers and then the app one with a command like the following:
docker run --rm --name app_container --link mongodb_container --link redis_container --link postgres_container -t localhost:5000/app_image
I entered the terminal of the app container and verified that its /etc/hosts file contains the IPs and names of the other containers. I am able to ping all the db containers, but I cannot connect to any of their ports.
A simple telnet mongodb_container 27017 just sits and waits forever, and the same happens with the other db containers. If I run the application, it also complains that it cannot connect to the specified db services.
Important note: I am able to telnet the corresponding ports of all the db containers from the host.
What might be happening?
EDIT: I'll include the run commands for the db containers:
docker run --rm --name mongodb_container -t localhost:5000/mongodb_image
docker run --rm --name redis_container -t localhost:5000/redis_image
docker run --rm --name postgres_container -t localhost:5000/postgres_image
Well, the problem with telnet seems to be related to the telnet client on Alpine Linux, since the following two commands showed me that the ports on the containers were open:
nmap -p27017 172.17.0.3
nc -vz 172.17.0.3 27017
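The container IP used above can be looked up with docker inspect; on the default bridge network, something like this (container name as in the question) prints it:
docker inspect -f '{{ .NetworkSettings.IPAddress }}' mongodb_container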
Being focused on the telnet command I had issued, I believed the problem was that the ports were closed, and I overlooked the configuration file the app was using to connect to the services (it had the wrong filename), my bad.
All works fine now.

Multiple docker containers as web server on a single IP

I have multiple Docker containers on a single machine. Each container runs a process plus a web server that provides an API for that process.
My question is: how can I access each API from my browser when the default port is 80? To be able to access the web server inside a Docker container I do the following:
sudo docker run -p 80:80 -t -i <yourname>/<imagename>
This way I can do from my computers terminal:
curl http://hostIP:80/foobar
But how to handle this with multiple containers and multiple web servers?
You can either publish each container on a different host port, e.g.
docker run -p 8080:80 -t -i <yourname>/<imagename>
docker run -p 8081:80 -t -i <yourname1>/<imagename1>
or put a proxy (nginx, Apache, Varnish, etc.) in front of your API containers.
Update:
The easiest way to set up the proxy would be to link it to the API containers, e.g. with an Apache config like:
RewriteRule ^api1/(.*)$ http://api1/$1 [proxy]
RewriteRule ^api2/(.*)$ http://api2/$1 [proxy]
you may run your containers like this:
docker run --name api1 <yourname>/<imagename>
docker run --name api2 <yourname1>/<imagename1>
docker run --link api1:api1 --link api2:api2 -p 80:80 <my_proxy_container>
This might be somewhat cumbersome, though, if you need to restart the API containers, as the proxy container would have to be restarted as well (links are fairly static in Docker at present). If this becomes a problem, you might look at approaches like fig or auto-updated proxy configuration: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/ . The latter link also shows proxying with nginx.
Update II:
In more modern versions of Docker, it is possible to use a user-defined network instead of the links shown above, to overcome some of the inconveniences of the deprecated link mechanism.
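A minimal sketch of the same layout on a user-defined bridge network (the network name api_net is an arbitrary choice; containers on the same network resolve each other by name, so no --link flags are needed):
docker network create api_net
docker run -d --name api1 --network api_net <yourname>/<imagename>
docker run -d --name api2 --network api_net <yourname1>/<imagename1>
docker run -d --name proxy --network api_net -p 80:80 <my_proxy_container>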
Only a single process is allowed to bind to a given port at a time, so running multiple containers means each will be exposed on a different host port number. Docker can allocate these automatically for you with the "-P" flag.
sudo docker run -P -t -i <yourname>/<imagename>
You can use the "docker port" and "docker inspect" commands to see the actual port number allocated to each container.
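For example, assuming the container was given the (hypothetical) name web1:
# list all port mappings for the container
docker port web1
# or look up the host port mapped to container port 80
docker port web1 80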

How to route HTTP access to multiple Docker containers

How can I route HTTP access for any domain to its specific Docker container? So,
any request for:
web1.mydomain.com goes to the Docker container with id asda912kas
web2.mydomain.com goes to the Docker container with id 8uada0a9sd
etc.
Every Docker container is running Apache, MySQL, and WordPress or other web apps. web1.mydomain.com and web2.mydomain.com use the same public IP address (like Apache vhosts do).
If your web containers run on the same machine, you can use jwilder/nginx-proxy (https://github.com/jwilder/nginx-proxy).
You run it with port 80 mapped:
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock jwilder/nginx-proxy
And then you run your web containers with the environment variable VIRTUAL_HOST set:
docker run -d -e VIRTUAL_HOST=web1.mydomain.com image1
docker run -d -e VIRTUAL_HOST=web2.mydomain.com image2
This works well for small deployments.
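To sanity-check the routing before DNS is in place, you can send requests with an explicit Host header from the machine running nginx-proxy (domains as above):
curl -H "Host: web1.mydomain.com" http://localhost/
curl -H "Host: web2.mydomain.com" http://localhost/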
