I'm trying to split a legacy system, composed of HBase and a PHP module, into two separate containers with the following docker-compose file:
version: '2'
services:
  php:
    image: my-legacy-php
    volumes:
      - ~/workspace/php:/workspace/php
    ports:
      - "80:80"
    links:
      - hbase
  hbase:
    image: dajobe/hbase
    hostname: hbase-docker
    ports:
      - "43590-44000:43590-44000"
      - "8085:8085"
      - "2181:2181"
      - "8080:8080"
      - "16010:16010"
      - "9095:9095"
      - "9090:9091"
      - "16020:16020"
      - "16030:16030"
      - "60000:60000"
    volumes:
      - ~/workspace/hbase-docker/data:/data
I'm using a public hbase-docker image that uses port 9090 for Thrift, while my legacy PHP module expects to connect via port 9091. I've tried to 'map' or 'forward' within the docker-compose.yml file with "9090:9091", without luck. I also tried the expose attribute of docker-compose, but it doesn't take two ports (only one, which is exposed to the other containers). How do I make that happen?
I want the listening port 9090 of the hbase container to appear as port 9091 from inside the php container.
One possible solution is building your own image, with dajobe/hbase as the base image, modifying the hbase configs and the ports exposed via EXPOSE to match your requirements, and then using that image in your compose file.
But this would require you to build and manage the image yourself.
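A minimal sketch of that approach, assuming HBase's config lives at /opt/hbase/conf inside dajobe/hbase (the exact path is an assumption) and that the standard hbase.regionserver.thrift.port property (default 9090) controls the Thrift port:

  FROM dajobe/hbase

  # hbase-site.xml is a copy of the image's config with one extra property:
  #   <property>
  #     <name>hbase.regionserver.thrift.port</name>
  #     <value>9091</value>
  #   </property>
  # The destination path is an assumption about dajobe/hbase's layout.
  COPY hbase-site.xml /opt/hbase/conf/hbase-site.xml

  # Advertise the new Thrift port instead of 9090.
  EXPOSE 9091

You would then build it (e.g. docker build -t my-hbase-9091 .) and point the compose file's image: at it.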
The solution is to put both services on the same docker network.
Specifically, add this to your docker-compose.yml:
networks:
  app_net:
    driver: bridge
Then, in each service's config, be sure to include:
    networks:
      - app_net
Finally (and you've already done this), be sure that the correct port mapping is included in the config for hbase:
    ports:
      - "9090:9091"
I want to create two Elasticsearch clusters in a single docker-compose file, so that I can test a few changes only on the new ES cluster.
My docker-compose file looks like this:
version: "2.2"
services:
elasticsearch-master:
image: elasticsearch:6.6.0
volumes:
- esdata1:/usr/share/elasticsearch/data
ports:
- "9200:9200"
mem_limit: '2048M'
new-elasticsearch-master:
image: elasticsearch:6.6.0
volumes:
- esdata2:/usr/share/elasticsearch/data
ports:
- "9400:9200"
mem_limit: '2048M'
search:
image: search:latest
entrypoint: java -Delasticsearch.host=elasticsearch-master -DnewElasticsearch.host=new-elasticsearch-master -DnewElasticsearch.port=9400 -jar app.jar
ports:
- "8083:8083"
depends_on:
- elasticsearch-master
- new-elasticsearch-master
mem_limit: '500M'
volumes:
esdata1:
esdata2:
I have one Java service where I am adding both hosts via different environment variables:
-Delasticsearch.host=elasticsearch-master
-DnewElasticsearch.host=new-elasticsearch-master
But when I run the following code from the Java search service:
new RestTemplate().getForEntity("http://elasticsearch-master:9200/_cat/indices?v",String.class)
this gives me the correct response.
But when I try to connect to the other host on 9400,
new RestTemplate().getForEntity("http://new-elasticsearch-master:9400/_cat/indices?v",String.class)
I get a Connection Refused error.
When I try the same host with 9200, it gives me a 200 response:
new RestTemplate().getForEntity("http://new-elasticsearch-master:9200/_cat/indices?v",String.class)
Can someone please tell me how I can make two different connections on different ports, as below?
http://elasticsearch-master:9200
http://new-elasticsearch-master:9400
Thanks
You got the expected behavior. The ports field in docker-compose maps container ports to your localhost, which means the "old" Elasticsearch will be available via localhost:9200 and the "new" Elasticsearch via localhost:9400.
On the other hand, docker-compose services communicate on an internal network, where the service name is the hostname and the port is the original listening port.
Thus, you were able to access (internally) your old one via http://elasticsearch-master:9200 and the new one via http://new-elasticsearch-master:9200.
If you wish to use the new Elasticsearch on port 9400, you need to change its http.port setting. You can do that like this:
  new-elasticsearch-master:
    image: elasticsearch:6.6.0
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    environment:
      - http.port=9400
    ports:
      - "9400:9400"
    mem_limit: '2048M'
Note that you have to change the port mapping as well (because it now maps the new container port, 9400, to the host's 9400).
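Alternatively, if port 9400 only matters from the host, you can leave the new cluster on its default 9200 and point the Java service at the internal port instead; a sketch of the relevant entrypoint change:

  search:
    image: search:latest
    # Inside the compose network both clusters listen on 9200; only the
    # host-published ports (9200 vs 9400) differ.
    entrypoint: java -Delasticsearch.host=elasticsearch-master -DnewElasticsearch.host=new-elasticsearch-master -DnewElasticsearch.port=9200 -jar app.jar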
I'm trying to pass a Redis URL to a docker container, but so far I couldn't get it to work. I did a little research and none of the answers worked for me.
version: '3.2'
services:
  redis:
    image: 'bitnami/redis:latest'
    container_name: redis
    hostname: redis
    expose:
      - 6379
    links:
      - api
  api:
    image: tufanmeric/api:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - proxy
    environment:
      - REDIS_URL=redis
    depends_on:
      - redis
    deploy:
      mode: global
      labels:
        - 'traefik.port=3002'
        - 'traefik.frontend.rule=PathPrefix:/'
        - 'traefik.frontend.rule=Host:api.example.com'
        - 'traefik.docker.network=proxy'
networks:
  proxy:
Error: Redis connection to redis failed - connect ENOENT redis
You can only communicate between containers on the same Docker network. Docker Compose creates a default network for you, and absent any specific declaration your redis container is on that network. But you also declare a separate proxy network, and only attach the api container to that other network.
The single simplest solution to this is to delete all of the networks: blocks everywhere and just use the default network Docker Compose creates for you. You may need to format the REDIS_URL variable as an actual URL, maybe like redis://redis:6379.
If you have a non-technical requirement to have separate networks, add - default to the networks listing for the api container.
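That variant would look like this sketch:

  api:
    image: tufanmeric/api:latest
    networks:
      - proxy
      - default   # also join the default network, where redis lives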
You have a number of other settings in your docker-compose.yml that aren't especially useful. expose: does almost nothing at all, and is usually also provided in a Dockerfile. links: is an outdated way to make cross-container calls, and as you've declared it, it wires calls from Redis to your API server, not the other way around. hostname: has no effect outside the container itself and is usually totally unnecessary. container_name: does have some visible effects, but usually the container name Docker Compose picks is just fine.
This would leave you with:
version: '3.2'
services:
  redis:
    image: 'bitnami/redis:latest'
  api:
    image: tufanmeric/api:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - REDIS_URL=redis://redis:6379
    depends_on:
      - redis
    deploy:
      mode: global
      labels:
        - 'traefik.port=3002'
        - 'traefik.frontend.rule=PathPrefix:/'
        - 'traefik.frontend.rule=Host:api.example.com'
        - 'traefik.docker.network=default'
I have a C# application, 'web_dotnet', in one container, which downloads data from a PHP service, 'web_php', in a second container. But what is the URL for the PHP service? The URL 'http://web_php:80' from the C# service doesn't work. This is my docker-compose.yml:
version: '3.5'
services:
  web_php:
    image: php:7.2.2-apache
    container_name: my_php_container
    volumes:
      - ./php/:/var/www/html/
    ports:
      - 3000:80
    networks:
      - mynet
  web_dotnet:
    build: .
    container_name: my_dotnet_container
    ports:
      - 2000:80
    networks:
      - mynet
networks:
  mynet:
    name: xyz_net
    driver: bridge
First, you can simplify your file by removing the unnecessary network declaration and port publishing. docker-compose creates a default user-defined bridge network for you and attaches all services to it, so there is no need to do it manually. Also, inside that network, all ports are reachable between services automatically.
Second, remove container_name. You are confusing yourself: services get hostnames equal to their service names by default.
version: '3.5'
services:
  web_php:
    image: php:7.2.2-apache
    volumes:
      - ./php/:/var/www/html/
  web_dotnet:
    build: .
Now, after all the useless stuff is cleaned up, just call web_php:80 from web_dotnet.
Afterwards, if you would like to access web_dotnet from outside docker-compose, add a ports directive to make it visible from the host.
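For example, to reach web_dotnet from the host again, reusing the original mapping (sketch):

  web_dotnet:
    build: .
    ports:
      - "2000:80"   # host port 2000 -> container port 80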
I have 2 applications that are separate codebases, and they each have their own database on the same db server instance.
I am trying to replicate this in docker, locally on my laptop. I want to be able to have both apps use the same database instance.
I would like
both apps to start in docker at the same time
both apps to be able to access the database on localhost
the database data is persisted
be able to view the data in the database using an IDE on localhost
So each of my apps has its own dockerfile and docker-compose file.
On app1, I start the docker instance of the app which is tied to the database. It all starts fine.
When I try to start app2, I get the following error:
ERROR: for app2_mssql_1 Cannot start service mssql: driver failed programming external connectivity on endpoint app2_mssql_1 (12d550c8f032ccdbe67e02445a0b87bff2b2306d03da1d14ad5369472a200620): Bind for 0.0.0.0:1433 failed: port is already allocated
How can I have them both running at the same time? BOTH apps need to be able to access each other's database tables!
Here are the docker-compose.yml files.
app1:
version: "3"
services:
web:
build:
context: .
args:
volumes:
- .:/app
ports:
- "3000:3000"
depends_on:
- mssql
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=P455w0rd!
volumes:
- app1_db:/var/lib/mssql/data
volumes:
app1_db:
and here is app2:
version: "3"
services:
web:
build:
context: .
args:
volumes:
- .:/app
ports:
- "3000:3000"
depends_on:
- mssql
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=P455w0rd!
volumes:
- app2_db:/var/lib/mssql/data
volumes:
app2_db:
Should I be using the same volume in each docker-compose file?
I guess the problem is that each app spins up its own db instance, when in reality I just want one instance used by all my apps?
The ports section in a docker-compose file binds the container port to the host's port, which causes the port conflict in your case.
You need to remove the ports section from at least one of the compose files. That way, docker-compose can come up for both, and you can access both apps at the same time. But remember that the two apps will be placed on separate network bridges.
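For example, app2's database service without the host publication would look like this sketch:

  mssql:
    image: 'microsoft/mssql-server-linux'
    # No ports: block; only containers on app2's own network reach 1433.
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=P455w0rd!
    volumes:
      - app2_db:/var/lib/mssql/data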
How docker-compose up works:
Suppose your app is in a directory called myapp, with your docker-compose.yml inside it.
When you run docker-compose up, the following happens:
A network called myapp_default is created.
A container is created using web’s configuration. It joins the network myapp_default under the name web.
A container is created using db’s configuration. It joins the network myapp_default under the name db.
If you run the second docker-compose.yml in a different folder, myapp2, then the network will be myapp2_default.
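For illustration, a minimal docker-compose.yml matching that description (hypothetical web and db services):

version: "3"
services:
  web:
    build: .                              # from ./myapp, joins myapp_default as "web"
  db:
    image: microsoft/mssql-server-linux   # joins myapp_default as "db"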
Your current configuration creates two volumes, two database containers, and two apps. If you make the apps run on the same network and run the database as a single container, it will work.
I don't think you were expecting two database containers and two volumes.
Approach 1:
docker-compose.yml as a single compose file:
version: "3"
services:
app1:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app1.
ports:
- "3030:3000"
depends_on:
- mssql
app2:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app2.
ports:
- "3032:3000"
depends_on:
- mssql
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=SqlServer1234!
volumes:
- app_docker_db:/var/lib/mssql/data
volumes:
app_docker_db:
Approach 2:
To isolate things further while still running them as separate compose files, create three compose files with a shared network.
docker-compose.yml for the database, with the network:
version: "3"
services:
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=SqlServer1234!
volumes:
- app_docker_db:/var/lib/mssql/data
networks:
- test_network
volumes:
app_docker_db
networks:
test_network:
docker-compose.yml for app1:
Remove the database container and add the lines below to your compose file:
version: "3"
services:
app1:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app1.
ports:
- "3030:3000"
networks:
default:
external:
name: my-pre-existing-network
Do the same for the other app's docker-compose file.
There are many other options for wiring docker-compose files together; see "Configure the default network" and "Use a pre-existing network" in the Compose networking docs.
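Note that the external network name must match what Compose actually created: networks get prefixed with the project (folder) name. If the database compose file lives in a folder called db, the app files would reference it like this sketch:

networks:
  default:
    external:
      name: db_test_network   # <project>_test_network from the database compose file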
You're exposing the same port (1433) twice to the host machine (this is what ports: does). This is not possible, as it would block the same port on your host; that's what the message says.
I think the most common way in these cases is to link your DBs to your apps (see https://docs.docker.com/compose/compose-file/#links). By doing this, your applications can still access the databases on their usual port (1433), but the databases are no longer accessible from the host (only from the containers linked to them).
Another error I see in your docker-compose files is that both applications are published on the same host port. This is not possible for the same reason. I would suggest changing one of them to "3001:3000", so you can access that application on port 3001.
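A sketch of those changes applied to app2's services (links: is legacy, but it matches this answer's suggestion):

  web:
    build:
      context: .
    ports:
      - "3001:3000"   # host 3001, avoiding app1's host port 3000
    links:
      - mssql
  mssql:
    image: 'microsoft/mssql-server-linux'
    # No ports: block, so the host no longer publishes 1433.
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=P455w0rd!
    volumes:
      - app2_db:/var/lib/mssql/data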
I have two services, web and helloworld. The following is my docker-compose YAML file:
version: "3"
services:
helloworld:
build: ./hello
volumes:
- ./hello:/usr/src/app
ports:
- 5001:80
web:
build: ./web
volumes:
- ./web:/usr/share/nginx/html
ports:
- 5000:80
depends_on:
- helloworld
Inside web's index.html, I made a button that opens http://helloworld when clicked. However, my button ends up going to helloworld.com instead of the correct service. Both services work fine when I go to localhost:5001 and localhost:5000. Am I missing something?
Docker's embedded DNS for service discovery is for container-to-container networking. For connections from outside of docker (e.g. from your browser) you need to publish the port (e.g. 5000 and 5001 in your file) and connect to that published port.
To use the container-to-container networking, you would need the DNS lookup to happen inside of the web container and the connection to go from web to helloworld, instead of from your browser to the container.
Edit: from your comment, you may find a reverse proxy helpful. Traefik and nginx-proxy are two examples out there. You can configure these to forward to containers by hostname or by a virtual path, and in your situation, I think path-based routing would be easier. The resulting compose file would look something like:
version: "3"
services:
traefik:
image: traefik
command: --docker --docker.watch
volumes:
- /var/lib/docker.sock:/var/lib/docker.sock
ports:
- 8080:80
helloworld:
build: ./hello
volumes:
- ./hello:/usr/src/app
labels:
- traefik.frontend.rule=PathPrefixStrip:/helloworld
- traefik.port=80
web:
build: ./web
volumes:
- ./web:/usr/share/nginx/html
labels:
- traefik.frontend.rule=PathPrefixStrip:/
- traefik.port=80
The above is all untested, off-the-top-of-my-head configuration, but it should point you in the right direction. With the PathPrefixStrip rule, you can make a link in web to "/helloworld", which will go to the other container. And since the link doesn't have a hostname or port, it will go to the same Traefik hostname/port you are already using.
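Concretely, the button in web's index.html could then use a relative link; a hypothetical sketch:

<!-- The browser resolves the relative path against the Traefik host/port
     it is already using; PathPrefixStrip routes it to helloworld. -->
<a href="/helloworld"><button>Say hello</button></a>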