I want to set up a local Hive server and found this repo:
https://github.com/big-data-europe/docker-hive
This is the YAML file I use:
version: "3"
services:
namenode:
image: bde2020/hadoop-namenode:2.0.0-hadoop2.7.4-java8
volumes:
- namenode:/hadoop/dfs/name
environment:
- CLUSTER_NAME=test
env_file:
- ./hadoop-hive.env
ports:
- "50070:50070"
datanode:
image: bde2020/hadoop-datanode:2.0.0-hadoop2.7.4-java8
volumes:
- datanode:/hadoop/dfs/data
env_file:
- ./hadoop-hive.env
environment:
SERVICE_PRECONDITION: "namenode:50070"
ports:
- "50075:50075"
hive-server:
image: bde2020/hive:2.3.2-postgresql-metastore
env_file:
- ./hadoop-hive.env
environment:
HIVE_CORE_CONF_javax_jdo_option_ConnectionURL: "jdbc:postgresql://hive-metastore/metastore"
SERVICE_PRECONDITION: "hive-metastore:9083"
ports:
- "10000:10000"
hive-metastore:
image: bde2020/hive:2.3.2-postgresql-metastore
env_file:
- ./hadoop-hive.env
command: /opt/hive/bin/hive --service metastore
environment:
SERVICE_PRECONDITION: "namenode:50070 datanode:50075 hive-metastore-postgresql:5432"
ports:
- "9083:9083"
hive-metastore-postgresql:
image: bde2020/hive-metastore-postgresql:2.3.0
presto-coordinator:
image: shawnzhu/prestodb:0.181
ports:
- "8080:8080"
volumes:
namenode:
datanode:
Error:
Error starting userland proxy: Bind for 0.0.0.0:50075: unexpected error Permission denied
Ports above 50000 are blocked on Windows, and I don't have admin rights on my company PC, so I tried to remap the host ports like this:
ports:
  - "40070:50070"
environment:
  SERVICE_PRECONDITION: "namenode:40070 datanode:40075 hive-metastore-postgresql:5432"
This lets the containers start, but they don't seem to be able to communicate:
hive-metastore_1 | [1/100] check for namenode:40070...
hive-metastore_1 | [1/100] namenode:40070 is not available yet
hive-metastore_1 | [1/100] try in 5s once again ...
956a5237dbe2_docker-hive_datanode_1 | [4/100] check for namenode:40070...
956a5237dbe2_docker-hive_datanode_1 | [4/100] namenode:40070 is not available yet
I tried to change both ports:
ports:
  - "40070:40070"
This does not work either, because some ports seem to be hardcoded:
ded7410db1b9_docker-hive_namenode_1 | 21/10/08 12:39:05 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
ded7410db1b9_docker-hive_namenode_1 | 21/10/08 12:39:05 INFO http.HttpServer2: Jetty bound to port 50070
Does anyone know how to get this running?
With the following:
ports:
  - "40070:50070"
all you are doing is directing traffic from host port 40070 to container port 50070.
So, for example, to access "namenode" from the host machine:
localhost:40070
And to access "namenode" inside the compose network:
namenode:50070
SERVICE_PRECONDITION in the BDE images repeatedly checks the given container and port until the dependency is reachable, and only then starts its own service, to ensure things are ready first. You have not changed the port the container itself listens on, so your containers should still communicate via port 50070.
You have incorrectly changed the precondition to check your host port 40070; it should check the internal container port 50070, regardless of the host port mapping.
Change it to the following:
ports:
  - "40070:50070"
environment:
  SERVICE_PRECONDITION: "namenode:50070 datanode:50075 hive-metastore-postgresql:5432"
You can change the ports Hive and the other services listen on with the environment variable file provided, but you shouldn't need to. Mapping host port 40070 to container port 50070 has no impact on how the Docker services talk to each other.
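For completeness, a sketch of the affected services after the fix. Only the two host ports above 50000 need remapping (the host-side numbers 40070 and 40075 are arbitrary picks below 50000, not prescribed by the repo); every container port and every precondition stays on the standard ports:

namenode:
  ports:
    - "40070:50070"  # host 40070 -> container 50070
datanode:
  environment:
    SERVICE_PRECONDITION: "namenode:50070"  # internal container port, unchanged
  ports:
    - "40075:50075"  # host 40075 -> container 50075
hive-metastore:
  environment:
    SERVICE_PRECONDITION: "namenode:50070 datanode:50075 hive-metastore-postgresql:5432"

From the host, the namenode UI is then at localhost:40070, while everything inside the Compose network keeps using namenode:50070.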
Related
I need to resolve a container name to its IP address from the Docker host.
The reason is that I need a container to run on the host network, but it must also be able to resolve the container "backend" that it connects to. (The container must send & receive multicast packets.)
docker-compose.yml
version: "3"
services:
database:
image: mongo
container_name: database
hostname: database
ports:
- "27017:27017"
backend:
image: "project/backend:latest"
container_name: backend
hostname: backend
environment:
- NODE_ENV=production
- DATABASE_HOST=database
- UUID=5025f846-7587-11ed-9ca7-8b992b5e7dd3
ports:
- "8080:8080"
depends_on:
- database
tty: true
frontend:
image: "project/frontend:latest"
container_name: frontend
hostname: frontend
ports:
- "80:80"
- "443:443"
depends_on:
- backend
environment:
- BACKEND_HOST=backend
connector:
image: "project/connector:latest"
container_name: connector
hostname: connector
ports:
- "1900:1900/udp"
#expose:
# - "1900/udp"
environment:
- NODE_ENV=production
- BACKEND_HOST=backend
- STARTUP_DELAY=1500
depends_on:
- backend
network_mode: host
tty: true
How can I resolve the hostname "backend" via Docker from the Docker host?
dig backend @127.0.0.11 and dig backend @172.17.0.1 did not work.
A test with a Docker Ubuntu image and socat proves that I can receive SSDP multicast packets:
docker run --net host -it --rm ubuntu
socat UDP4-RECVFROM:1900,ip-add-membership=239.255.255.250:0.0.0.0,fork -
The only problem I still have is the DNS/container name resolution from the host (network).
TL;DR
The container "connector" must be on the host network,but also be able to resolve the container name "backend" to the docker internal IP Address.
NOTE: Perhaps this is better suited on superuser or similar?
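For reference, the Docker CLI itself can look up a container's internal address from the host, without relying on DNS; a minimal sketch, assuming the container_name: backend from the compose file above:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' backend

This prints the container's IP on each network it is attached to, which a host-side process could use in place of the unresolvable name.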
This is my docker-compose-proxy.yml
version: "3.7"
services:
app:
build:
context: .
ports:
- "8000:8000"
volumes:
- ./app:/app
- static_data:/vol/web
environment:
- DB_HOST=db
- DB_NAME=app
- DB_USER=postgres
- DB_PASS=supersecretpassword
- ALLOWED_HOSTS=127.0.0.1
depends_on:
- db
proxy:
image : proxy:latest
depends_on:
- app
ports:
- "8000:8000"
volumes:
- static_data:/vol/static_data
db:
image: postgres:10-alpine
environment:
- POSTGRES_DB=app
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=supersecretpassword
volumes:
static_data:
I checked the port before running my command:
netstat -ltnp | grep ':8000'
and the port was not occupied.
But when I run
docker-compose -f docker-compose-proxy.yml up
I get this error:
ERROR: for 9bac48e03668_recipe-app-api-devops_proxy_1 Cannot start service proxy: driver failed programming external connectivity on endpoint recipe-app-api-devops_proxy_1 (af5860c135cb37026dcac6ce27151cd4e8448eaddc542d50dcd009c0e24c09fa): Bind for 0.0.0.0:8000 failed: port is already allocated
Why? How do I resolve this issue?
You specified port 8000 at
ports:
  - "8000:8000"
Since this port is already used for something, you get the error that it's already allocated. So, you will need to find out what is using port 8000 and either change the port of your container, stop the other process, or change the other process's port.
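One way to find the owner of the port (a sketch; assumes lsof is installed, and sudo may be needed to see other users' processes):

sudo lsof -i :8000

If the output names docker-proxy, the port is already published by another container.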
You're trying to bind host port 8000 to two different things:
services:
  app:
    ports:
      - "8000:8000"
  proxy:
    ports:
      - "8000:8000"
So this tells Compose to try to route host port 8000 to the app container, and also to route host port 8000 to the proxy container, and it can't do both. That's essentially the error you're getting.
If you want all requests to your system to go through the proxy container, you can just delete the ports: block from the app container. It will still be visible from other containers in the same Compose file via http://app:8000 but it won't be reachable from outside Docker.
If you need both containers to be accessible, you need to change the first ports: number, but not the second, on one or the other of the containers.
ports:
  - '8001:8000' # host port 8001 -> container port 8000
This won't affect connections between containers at all; regardless of what ports: are or aren't present, they will always use the "standard" port number for the container they're trying to connect to.
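For illustration, a sketch of the proxy-only variant described above, reusing the service names from the original file (app keeps listening on 8000 inside the Compose network; only proxy is published to the host):

services:
  app:
    build:
      context: .
    # no ports: block; other containers still reach this as http://app:8000
  proxy:
    image: proxy:latest
    depends_on:
      - app
    ports:
      - "8000:8000"  # host port 8000 -> proxy container port 8000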
I am trying to set up a Learning Locker server within Docker (on Windows 10, Docker using WSL for emulation) using the repo from michzimny. This service is composed of several Docker containers (Mongo, Redis, NGINX, etc.) networked together. Using the provided docker-compose.yml file I have been able to set up the service and access it from localhost, but I cannot access the server from any other machine on my home network.
This is a specific case, but some guidance will be valuable, as I am very new to Docker and will need to build many such environments in the future: for now on Windows, but later in Docker on Synology, where the services can be accessed from the network and the internet.
My research has led me to user-defined bridging using docker run -p [hostip]:80:80, but this didn't work for me. I have also turned off the Windows firewall, since that seems to cause a host of issues for some, but still no effect. I tried to bridge my virtual switch manager for WSL using the Windows 10 Hyper-V manager, but that didn't work. I also tried bridging the WSL connector to the LAN using basic Windows 10 networking, but that didn't work either and I had to reset my network.
So the first question is: is this a Windows networking issue or a Docker configuration issue?
The second question, assuming it's a Docker configuration issue, is: how can I modify the following YAML file to make the service accessible to the outside network?
version: '2'
services:
  mongo:
    image: mongo:3.4
    restart: unless-stopped
    volumes:
      - "${DATA_LOCATION}/mongo:/data/db"
  redis:
    image: redis:4-alpine
    restart: unless-stopped
  xapi:
    image: learninglocker/xapi-service:2.1.10
    restart: unless-stopped
    environment:
      - MONGO_URL=mongodb://mongo:27017/learninglocker_v2
      - MONGO_DB=learninglocker_v2
      - REDIS_URL=redis://redis:6379/0
    depends_on:
      - mongo
      - redis
    volumes:
      - "${DATA_LOCATION}/xapi-storage:/usr/src/app/storage"
  api:
    image: michzimny/learninglocker2-app:${DOCKER_TAG}
    environment:
      - DOMAIN_NAME
      - APP_SECRET
      - SMTP_HOST
      - SMTP_PORT
      - SMTP_SECURED
      - SMTP_USER
      - SMTP_PASS
    command: "node api/dist/server"
    restart: unless-stopped
    depends_on:
      - mongo
      - redis
    volumes:
      - "${DATA_LOCATION}/app-storage:/opt/learninglocker/storage"
  ui:
    image: michzimny/learninglocker2-app:${DOCKER_TAG}
    environment:
      - DOMAIN_NAME
      - APP_SECRET
      - SMTP_HOST
      - SMTP_PORT
      - SMTP_SECURED
      - SMTP_USER
      - SMTP_PASS
    command: "./entrypoint-ui.sh"
    restart: unless-stopped
    depends_on:
      - mongo
      - redis
      - api
    volumes:
      - "${DATA_LOCATION}/app-storage:/opt/learninglocker/storage"
      - "${DATA_LOCATION}/ui-logs:/opt/learninglocker/logs"
  worker:
    image: michzimny/learninglocker2-app:${DOCKER_TAG}
    environment:
      - DOMAIN_NAME
      - APP_SECRET
      - SMTP_HOST
      - SMTP_PORT
      - SMTP_SECURED
      - SMTP_USER
      - SMTP_PASS
    command: "node worker/dist/server"
    restart: unless-stopped
    depends_on:
      - mongo
      - redis
    volumes:
      - "${DATA_LOCATION}/app-storage:/opt/learninglocker/storage"
  nginx:
    image: michzimny/learninglocker2-nginx:${DOCKER_TAG}
    environment:
      - DOMAIN_NAME
    restart: unless-stopped
    depends_on:
      - ui
      - xapi
    ports:
      - "443:443"
      - "80:80"
So far I have attempted to change the ports option to the following:
ports:
  - "192.168.1.102:443:443"
  - "192.168.1.102:80:80"
But then the container wasn't even accessible from the host machine anymore. I also tried adding network_mode: host under the nginx service, but the build failed, saying it was not compatible with port mapping. Do I need to set network_mode: host for every service, or is the problem something else entirely?
Any help is appreciated.
By the looks of your docker-compose.yml, you are exposing ports 80 & 443 to your host (the Windows machine). So, if your Windows IP is 192.168.1.102, you should be able to reach http://192.168.1.102 and https://192.168.1.102 on your LAN, provided nothing is blocking them (firewall etc.).
You can confirm that the host is indeed listening on those ports by running 'netstat -a' and checking for entries in the LISTENING state.
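For example, on Windows (a sketch; findstr ships with Windows):

netstat -an | findstr ":80"
netstat -an | findstr ":443"

A line in the LISTENING state for 0.0.0.0:80 or 0.0.0.0:443 confirms the mapping is in place.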
I set up a Docker network with a db container, a nextcloud container, and an nginx container. I can access the Nextcloud website at 'ip-address':8080, but I want to access it without specifying port 8080. How can I do that?
This is my docker-compose.yml:
version: '2'
volumes:
  nextcloud:
  db:
services:
  db:
    image: mariadb
    restart: always
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=
      - MYSQL_PASSWORD=
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
  app:
    image: nextcloud:fpm
    restart: always
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
    environment:
      - MYSQL_PASSWORD=
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=db
  web:
    image: nginx
    restart: always
    ports:
      - 8080:80
    links:
      - app
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    volumes_from:
      - app
What you want is to avoid having to specify the port when you request a URI. One way to do that is to use the default port for the protocol you are using (80 for HTTP, 443 for HTTPS, 21 for FTP, etc.), and then rely on your client to automatically fall back to the default port.
In a Docker Compose file, the syntax for publishing a port is <host_port>:<container_port> (see the documentation). That means 8080:80 exposes port 80 of the container on port 8080 of your Docker host.
In your case, the service is exposing an HTTP server, so you have to move it to the default port 80 in order to omit it. Update services.web.ports[0] from 8080:80 to 80:80, and you will be able to access Nextcloud from 'ip-address'.
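That is, a one-line change in the web service from the file above:

web:
  image: nginx
  restart: always
  ports:
    - 80:80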
I have a setup where I build 2 containers with docker-compose.
One container is a web application; I can access it on port 8080. The other container is Elasticsearch, accessible on port 9200.
This is the content of my docker-compose.yml file:
version: '3'
services:
  serverapplication:
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
  elasticsearch:
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
When I browse to http://localhost:8080/serverapplication I can see my server application.
When I browse to http://localhost:9200/ I can see the default page of ElasticSearch.
But when I try to access Elasticsearch from inside the serverapplication, I get a "connection refused". It seems that port 9200 is unreachable at localhost for the server application.
How can I fix this?
It's never safe to use localhost, since localhost means something different for your host system, for elasticsearch, and for your server application. You're only able to access the containers from your host's localhost because you're mapping container ports onto your host's ports. Instead:
- put them in the same network
- give the containers a name
- access elasticsearch through its container name, which Docker automatically resolves to the current IP of your elasticsearch container
Code:
version: '3'
services:
  serverapplication:
    container_name: serverapplication
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
    networks:
      - my-network
  elasticsearch:
    container_name: elasticsearch
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
Your server application must use the hostname elasticsearch to access the Elasticsearch service, i.e., http://elasticsearch:9200.
Your serverapplication and elasticsearch are running in different containers. The localhost of serverapplication is different from the localhost of elasticsearch.
docker-compose sets up a network between the containers such that they can be accessed with their service names. So from your serverapplication, you must use the name 'elasticsearch' to connect to it.
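A quick way to verify the name resolution from inside the application container (a sketch; assumes curl is installed in the serverapplication image):

docker exec serverapplication curl -s http://elasticsearch:9200

If this prints the Elasticsearch banner JSON, the service name resolves and the containers can communicate.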