Hue access to HDFS: bypass default hue.ini? - docker

The setup
I am trying to compose a lightweight, minimal Hadoop stack with the images provided by bde2020 (for learning purposes). Right now, the stack includes (among others)
a namenode
a datanode
hue
Basically, I started from Big Data Europe's official docker-compose and added a Hue image based on their documentation.
The issue
Hue's file browser can't access HDFS:
Cannot access: /user/dav. The HDFS REST service is not available. Note: you are a Hue admin but not a HDFS superuser, "hdfs" or part of HDFS supergroup, "supergroup".
HTTPConnectionPool(host='namenode', port=50070): Max retries exceeded with url: /webhdfs/v1/user/dav?op=GETFILESTATUS&user.name=hue&doas=dav (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f8119a3cf10>: Failed to establish a new connection: [Errno 111] Connection refused',))
What I tried so far to narrow down the issue
explicitly put all the services on the same network
pointed dfs_webhdfs_url to localhost:9870/webhdfs/v1 in the namenode env file (source) and edited hue.ini in Hue's container accordingly (adding webhdfs_url=http://namenode:9870/webhdfs/v1)
when I log into Hue's container, I can see that namenode's port 9870 is open (nmap -p 9870 namenode); 50070 is not, so I don't think my issue is network related
Despite editing hue.ini, Hue still goes for port 50070. So, how can I force Hue to go for port 9870 in my current setup (if this is the reason)?
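For what it's worth, WebHDFS can also be probed directly from inside Hue's container with the same request Hue makes (a quick sanity check, assuming curl is available in the image):

curl -s "http://namenode:9870/webhdfs/v1/user/dav?op=GETFILESTATUS&user.name=hue&doas=dav"

A JSON FileStatus response here would confirm the service itself is reachable on 9870, and that only Hue's configuration still points at 50070.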
docker-compose
version: '3.7'
services:
  namenode:
    image: bde2020/hadoop-namenode:2.0.0-hadoop3.1.1-java8
    container_name: namenode
    hostname: namenode
    domainname: hadoop
    ports:
      - 9870:9870
    volumes:
      - hadoop_namenode:/hadoop/dfs/name
      - ./entrypoints/namenode/entrypoint.sh:/entrypoint.sh
    env_file:
      - ./hadoop.env
      - .env
    networks:
      - hadoop_net
    # TODO adduser --ingroup hadoop dav
  datanode1:
    image: bde2020/hadoop-datanode:2.0.0-hadoop3.1.1-java8
    container_name: datanode
    hostname: datanode1
    domainname: hadoop
    volumes:
      - hadoop_datanode:/hadoop/dfs/data
    environment:
      SERVICE_PRECONDITION: "namenode:9870"
    env_file:
      - ./hadoop.env
    networks:
      - hadoop_net
  resourcemanager:
    image: bde2020/hadoop-resourcemanager:2.0.0-hadoop3.1.1-java8
    container_name: resourcemanager
    environment:
      SERVICE_PRECONDITION: "namenode:9870 datanode:9864"
    env_file:
      - ./hadoop.env
    networks:
      - hadoop_net
  nodemanager1:
    image: bde2020/hadoop-nodemanager:2.0.0-hadoop3.1.1-java8
    container_name: nodemanager
    environment:
      SERVICE_PRECONDITION: "namenode:9870 datanode:9864 resourcemanager:8088"
    env_file:
      - ./hadoop.env
    networks:
      - hadoop_net
  historyserver:
    image: bde2020/hadoop-historyserver:2.0.0-hadoop3.1.1-java8
    container_name: historyserver
    environment:
      SERVICE_PRECONDITION: "namenode:9870 datanode:9864 resourcemanager:8088"
    volumes:
      - hadoop_historyserver:/hadoop/yarn/timeline
    env_file:
      - ./hadoop.env
    networks:
      - hadoop_net
  filebrowser:
    container_name: hue
    image: bde2020/hdfs-filebrowser:3.11
    ports:
      - "8088:8088"
    env_file:
      - ./hadoop.env
    volumes: # BYPASS DEFAULT webhdfs url
      - ./overrides/hue/hue.ini:/opt/hue/desktop/conf.dist/hue.ini
    environment:
      - NAMENODE_HOST=namenode
    networks:
      - hadoop_net
networks:
  hadoop_net:
volumes:
  hadoop_namenode:
  hadoop_datanode:
  hadoop_historyserver:

I was able to get the Filebrowser working with this INI
[desktop]
http_host=0.0.0.0
http_port=8888
time_zone=America/Chicago
dev=true
app_blacklist=impala,zookeeper,oozie,hbase,security,search
[hadoop]
[[hdfs_clusters]]
[[[default]]]
fs_defaultfs=hdfs://namenode:8020
webhdfs_url=http://namenode:50070/webhdfs/v1
security_enabled=false
And this compose
version: "2"
services:
namenode:
image: bde2020/hadoop-namenode:1.1.0-hadoop2.7.1-java8
container_name: namenode
ports:
- 8020:8020
- 50070:50070
# - 59050:59050
volumes:
- hadoop_namenode:/hadoop/dfs/name
environment:
- CLUSTER_NAME=test
env_file:
- ./hadoop.env
networks:
- hadoop
datanode1:
image: bde2020/hadoop-datanode:1.1.0-hadoop2.7.1-java8
container_name: datanode1
ports:
- 50075:50075
# - 50010:50010
# - 50020:50020
depends_on:
- namenode
volumes:
- hadoop_datanode1:/hadoop/dfs/data
env_file:
- ./hadoop.env
networks:
- hadoop
hue:
image: gethue/hue
container_name: hue
ports:
- 8000:8888
depends_on:
- namenode
volumes:
- ./conf/hue.ini:/hue/desktop/conf/pseudo-distributed.ini
networks:
- hadoop
- frontend
volumes:
hadoop_namenode:
hadoop_datanode1:
networks:
hadoop:
frontend:
hadoop.env also has to add hue as a proxy user:
CORE_CONF_fs_defaultFS=hdfs://namenode:8020
CORE_CONF_hadoop_http_staticuser_user=root
CORE_CONF_hadoop_proxyuser_hue_hosts=*
CORE_CONF_hadoop_proxyuser_hue_groups=*
HDFS_CONF_dfs_replication=1
HDFS_CONF_dfs_webhdfs_enabled=true
HDFS_CONF_dfs_permissions_enabled=false
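(The bde2020 images render each CORE_CONF_* variable into a core-site.xml property, mapping underscores to dots, so the two proxy-user lines above should end up roughly as follows; a sketch of the generated file, not its verbatim contents:)

<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>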

Yeah, found it. A few key elements:
in Hadoop 3.x, WebHDFS no longer listens on 50070; 9870 is the standard port
overriding hue.ini involves mounting a file named hue-overrides.ini
the Hue image from gethue is more up to date than the one from bde2020 (their Hadoop stack rocks, though)
Docker-compose
version: '3.7'
services:
  namenode:
    image: bde2020/hadoop-namenode:2.0.0-hadoop3.1.1-java8
    container_name: namenode
    ports:
      - 9870:9870
      - 8020:8020
    volumes:
      - hadoop_namenode:/hadoop/dfs/name
      - ./overrides/namenode/entrypoint.sh:/entrypoint.sh
    env_file:
      - ./hadoop.env
      - .env
    networks:
      - hadoop
  filebrowser:
    container_name: hue
    image: gethue/hue:4.4.0
    ports:
      - "8000:8888"
    env_file:
      - ./hadoop.env
    volumes: # HERE
      - ./overrides/hue/hue-overrides.ini:/usr/share/hue/desktop/conf/hue-overrides.ini
    depends_on:
      - namenode
    networks:
      - hadoop
      - frontend
  datanode1:
    image: bde2020/hadoop-datanode:2.0.0-hadoop3.1.1-java8
    container_name: datanode1
    volumes:
      - hadoop_datanode:/hadoop/dfs/data
    environment:
      SERVICE_PRECONDITION: "namenode:9870"
    env_file:
      - ./hadoop.env
    networks:
      - hadoop
  resourcemanager:
    image: bde2020/hadoop-resourcemanager:2.0.0-hadoop3.1.1-java8
    container_name: resourcemanager
    environment:
      SERVICE_PRECONDITION: "namenode:9870 datanode1:9864"
    env_file:
      - ./hadoop.env
    networks:
      - hadoop
  nodemanager1:
    image: bde2020/hadoop-nodemanager:2.0.0-hadoop3.1.1-java8
    container_name: nodemanager
    environment:
      SERVICE_PRECONDITION: "namenode:9870 datanode1:9864 resourcemanager:8088"
    env_file:
      - ./hadoop.env
    networks:
      - hadoop
  historyserver:
    image: bde2020/hadoop-historyserver:2.0.0-hadoop3.1.1-java8
    container_name: historyserver
    environment:
      SERVICE_PRECONDITION: "namenode:9870 datanode1:9864 resourcemanager:8088"
    volumes:
      - hadoop_historyserver:/hadoop/yarn/timeline
    env_file:
      - ./hadoop.env
    networks:
      - hadoop
networks:
  hadoop:
  frontend:
volumes:
  hadoop_namenode:
  hadoop_datanode:
  hadoop_historyserver:
hadoop.env
CORE_CONF_fs_defaultFS=hdfs://namenode:8020
CORE_CONF_hadoop_http_staticuser_user=root
CORE_CONF_hadoop_proxyuser_hue_hosts=*
CORE_CONF_hadoop_proxyuser_hue_groups=*
CORE_CONF_io_compression_codecs=org.apache.hadoop.io.compress.SnappyCodec
HDFS_CONF_dfs_replication=1
HDFS_CONF_dfs_webhdfs_enabled=true
HDFS_CONF_dfs_permissions_enabled=false
HDFS_CONF_dfs_namenode_datanode_registration_ip___hostname___check=false
hue-overrides.ini
[desktop]
http_host=0.0.0.0
http_port=8888
time_zone=France
dev=true
app_blacklist=impala,zookeeper,oozie,hbase,security,search
[hadoop]
[[hdfs_clusters]]
[[[default]]]
fs_defaultfs=hdfs://namenode:8020
webhdfs_url=http://namenode:9870/webhdfs/v1
security_enabled=false
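Once the stack is up, you can confirm the override was actually mounted and that WebHDFS answers on the new port (a sketch, assuming the container names above and that curl exists in the images):

docker exec hue cat /usr/share/hue/desktop/conf/hue-overrides.ini
docker exec namenode curl -s "http://localhost:9870/webhdfs/v1/?op=LISTSTATUS&user.name=hue"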
Thanks @cricket_007

Related

Got problem deploying docker-compose service (port issue)

I want to deploy a service that will allow me to use Spark and MongoDB in a Jupyter notebook.
I use docker-compose to build up the service, and it's as follows:
version: "3.3"
volumes:
shared-workspace:
networks:
spark-net:
driver: bridge
services:
spark-master:
image: uqteaching/cloudcomputing:spark-master-v1
container_name: spark-master
networks:
- "spark-net"
ports:
- "8080:8080"
- "7077:7077"
environment:
- INIT_DAEMON_STEP=setup_spark
- "PYSPARK_PYTHON=/usr/bin/python3"
- "PYSPARK_DRIVER_PYTHON=/usr/bin/python3"
spark-worker-1:
image: uqteaching/cloudcomputing:spark-worker-v1
container_name: spark-worker-1
depends_on:
- spark-master
networks:
- "spark-net"
ports:
- "8081:8081"
environment:
- "SPARK_MASTER=spark://spark-master:7077"
- "PYSPARK_PYTHON=/usr/bin/python3"
- "PYSPARK_DRIVER_PYTHON=/usr/bin/python3"
spark-worker-2:
image: uqteaching/cloudcomputing:spark-worker-v1
container_name: spark-worker-2
depends_on:
- spark-master
networks:
- "spark-net"
ports:
- "8082:8082"
environment:
- "SPARK_MASTER=spark://spark-master:7077"
- "PYSPARK_PYTHON=/usr/bin/python3"
- "PYSPARK_DRIVER_PYTHON=/usr/bin/python3"
mongo:
image: mongo
container_name: 'mongo'
networks:
- "spark-net"
ports:
- "27017:27017"
mongo_admin:
image: mongo-express
container_name: 'mongoadmin'
networks:
- "spark-net"
depends_on:
- mongo
links:
- mongo
ports:
- "8091:8091"
jupyter-notebook:
container_name: jupyternb
image: jupyter/all-spark-notebook:42f4c82a07ff
depends_on:
- mongo
- spark-master
links:
- mongo
expose:
- "8888"
networks:
- "spark-net"
ports:
- "8888:8888"
volumes:
- ./nbs:/home/jovyan/work/nbs
- ./events:/tmp/spark-events
environment:
- "PYSPARK_PYTHON=/usr/bin/python3"
- "PYSPARK_DRIVER_PYTHON=/usr/bin/python3"
command: "start-notebook.sh \
--ip=0.0.0.0 \
--allow-root \
--no-browser \
--notebook-dir=/home/jovyan/work/nbs \
--NotebookApp.token='' \
--NotebookApp.password=''
"
And the result is something like this: [screenshot of the error].
I don't know why: even though I set these two services to listen on different ports, they are both using 8081/tcp at the same time, which caused them both to crash.
I want to solve this.
mongo-express seems to need port 8081 internally, so use another external port to be able to log in to the web UI at http://localhost:8092. It would then be something like this:
mongo_admin:
  image: mongo-express
  container_name: 'mongoadmin'
  networks:
    - "spark-net"
  depends_on:
    - mongo
  links:
    - mongo
  ports:
    # keep 8081 as the container port (mongo-express listens there)
    # and move only the host side to 8092
    - "8092:8081"

The Docker container of Caddy is in a restarting state

This is the docker-compose file that starts the containers; all are working fine except Caddy.
version: '3'
services:
  db:
    image: postgres:latest
    restart: always
    expose:
      - "5555"
    volumes:
      - pgdata:/var/lib/postgresql/data/
    environment:
      - POSTGRES_DB=chiefonboarding
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    networks:
      - global
  web:
    image: chiefonboarding/chiefonboarding:latest
    restart: always
    expose:
      - "9000"
    environment:
      - SECRET_KEY=somethingsupersecret
      - BASE_URL=https://on.hr.gravesfoods.com
      - DATABASE_URL=postgres://postgres:postgres@db:5432/chiefonboarding
      - ALLOWED_HOSTS=on.hr.gravesfoods.com
      - DEFAULT_FROM_EMAIL=hello@gravesfoods.com
    depends_on:
      - db
    networks:
      - global
  caddy:
    image: caddy:2.3.0-alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - $PWD/Caddyfile:/etc/caddy/Caddyfile
      - $PWD/site:/srv
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - global
volumes:
  pgdata:
  caddy_data:
  caddy_config:
networks:
  global:
Also these are the logs it is generating:
{"level":"info","ts":1656425557.6256478,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
run: adapting config using caddyfile: server block 0, key 0 (https://on.hr.gravesfoods.com:80): determining listener address: [https://on.hr.gravesfoods.com:80] scheme and port violate convention
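That error usually means the Caddyfile pairs the https:// scheme with port 80 in its site address. A hypothetical Caddyfile that avoids the conflict by letting Caddy default to 443 (the web service's internal port 9000 is taken from the compose file above; the actual Caddyfile isn't shown in the question):

# Site address without an explicit scheme/port, so Caddy picks 443 + HTTPS
on.hr.gravesfoods.com {
    reverse_proxy web:9000
}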

Hive-metastore not finding Hadoop Datanode

I have a Hadoop cluster with one namenode and one datanode instantiated with docker compose. In addition, I am trying to launch Hive, but the Hive metastore seems not to find my datanode even though it is up and running; in fact, checking the log shows:
namenode:9870 is available.
check for datanode:9871...
datanode:9871 is not available yet
try in 5s once again ...
Here is my docker-compose.yml
#HADOOP
namenode:
  image: bde2020/hadoop-namenode:2.0.0-hadoop3.2.1-java8
  container_name: namenode
  restart: always
  expose:
    - "9870"
    - "54310"
    - "9000"
  ports:
    - 9870:9870
    - 9000:9000
  volumes:
    - ./data/hadoop_data/:/hadoop_data
  environment:
    - CLUSTER_NAME=test
    - CORE_CONF_fs_defaultFS=hdfs://namenode:9000
    - CORE_CONF_hadoop_http_staticuser_user=root
    - CORE_CONF_hadoop_proxyuser_hue_hosts=*
    - CORE_CONF_hadoop_proxyuser_hue_groups=*
    - CORE_CONF_io_compression_codecs=org.apache.hadoop.io.compress.SnappyCodec
    - HDFS_CONF_dfs_webhdfs_enabled=true
    - HDFS_CONF_dfs_permissions_enabled=false
    - HDFS_CONF_dfs_namenode_datanode_registration_ip___hostname___check=false
    - HDFS_CONF_dfs_safemode_threshold_pct=0
datanode:
  image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8
  container_name: datanode
  restart: always
  expose:
    - "9871"
  environment:
    SERVICE_PRECONDITION: "namenode:9870"
  ports:
    - "9871:9871"
  env_file:
    - hive.env
hive-server:
  image: bde2020/hive:2.3.2-postgresql-metastore
  container_name: hive-server
  volumes:
    - ./employee:/employee
  env_file:
    - hive.env
  environment:
    HIVE_CORE_CONF_javax_jdo_option_ConnectionURL: "jdbc:postgresql://hive-metastore/metastore"
    SERVICE_PRECONDITION: "hive-metastore:9083"
  depends_on:
    - hive-metastore
  ports:
    - "10000:10000"
hive-metastore:
  image: bde2020/hive:2.3.2-postgresql-metastore
  container_name: hive-metastore
  env_file:
    - hive.env
  command: /opt/hive/bin/hive --service metastore
  environment:
    SERVICE_PRECONDITION: "namenode:9870 datanode:9871 hive-metastore-postgresql:5432"
  depends_on:
    - hive-metastore-postgresql
  ports:
    - "9083:9083"
hive-metastore-postgresql:
  image: bde2020/hive-metastore-postgresql:2.3.0
  container_name: hive-metastore-postgresql
  volumes:
    - ./metastore-postgresql/postgresql/data:/var/lib/postgresql/data
  depends_on:
    - datanode
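One thing stands out against the other compose files on this page: the bde2020 datanode serves its web UI on 9864 in Hadoop 3, so unless hive.env (not shown) overrides dfs.datanode.http.address to 9871, the precondition is polling a port the datanode never opens. A hedged sketch of that change:

hive-metastore:
  environment:
    # probe the datanode's default Hadoop 3 web port instead of 9871
    SERVICE_PRECONDITION: "namenode:9870 datanode:9864 hive-metastore-postgresql:5432"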

Wrong HDFS Configured Capacity in Docker Stack

I'm using a Docker stack that implements, on the same machine, a Hadoop Namenode, two Datanodes, two Node Managers, a Resource Manager, a History Server, and other technologies.
I encountered an issue related to the HDFS Configured Capacity that is shown in the HDFS UI.
I'm using a machine with 256GB capacity and the two-datanode implementation mentioned above. Instead of distributing the total capacity between the two nodes, HDFS duplicates the capacity of the entire machine, giving 226.87GB to each datanode (as shown in the HDFS UI screenshot).
Any thoughts on how to make HDFS show the right capacity?
Here is the portion of the docker compose that implements the hadoop technologies mentioned above.
services:
  # Hadoop master
  namenode:
    image: bde2020/hadoop-namenode:2.0.0-hadoop3.2.1-java8
    container_name: namenode
    ports:
      - 9870:9870
      - 8020:8020
    volumes:
      - ./namenode/home/${ADMIN_NAME:?err}:/home/${ADMIN_NAME:?err}
      - ./namenode/hadoop-data:/hadoop-data
      - ./namenode/entrypoint.sh:/entrypoint.sh
      - hadoop-namenode:/hadoop/dfs/name
    env_file:
      - ./hadoop.env
      - .env
    networks:
      - hadoop
  resourcemanager:
    restart: always
    image: bde2020/hadoop-resourcemanager:2.0.0-hadoop3.2.1-java8
    container_name: resourcemanager
    ports:
      - 8088:8088
    environment:
      SERVICE_PRECONDITION: "namenode:9870 datanode1:9864"
    env_file:
      - ./hadoop.env
    networks:
      - hadoop
  # Hadoop slave 1
  datanode1:
    image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8
    container_name: datanode1
    volumes:
      - hadoop-datanode-1:/hadoop/dfs/data
    environment:
      SERVICE_PRECONDITION: "namenode:9870"
    env_file:
      - ./hadoop.env
    networks:
      - hadoop
  nodemanager1:
    image: bde2020/hadoop-nodemanager:2.0.0-hadoop3.2.1-java8
    container_name: nodemanager1
    volumes:
      - ./nodemanagers/entrypoint.sh:/entrypoint.sh
    environment:
      SERVICE_PRECONDITION: "namenode:9870 datanode1:9864 resourcemanager:8088"
    env_file:
      - ./hadoop.env
      - .env
    networks:
      - hadoop
  # Hadoop slave 2
  datanode2:
    image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8
    container_name: datanode2
    volumes:
      - hadoop-datanode-2:/hadoop/dfs/data
    environment:
      SERVICE_PRECONDITION: "namenode:9870"
    env_file:
      - ./hadoop.env
    networks:
      - hadoop
  nodemanager2:
    image: bde2020/hadoop-nodemanager:2.0.0-hadoop3.2.1-java8
    container_name: nodemanager2
    volumes:
      - ./nodemanagers/entrypoint.sh:/entrypoint.sh
    environment:
      SERVICE_PRECONDITION: "namenode:9870 datanode2:9864 resourcemanager:8088"
    env_file:
      - ./hadoop.env
      - .env
    networks:
      - hadoop
  historyserver:
    image: bde2020/hadoop-historyserver:2.0.0-hadoop3.2.1-java8
    container_name: historyserver
    ports:
      - 8188:8188
    environment:
      SERVICE_PRECONDITION: "namenode:9870 datanode1:9864 datanode2:9864 resourcemanager:8088"
    volumes:
      - hadoop-historyserver:/hadoop/yarn/timeline
    env_file:
      - ./hadoop.env
    networks:
      - hadoop
You will need to create the docker volumes with a defined size that fits on your machine and then ask each DN to use that volume. Then when the DN inspects the size of its volumes, it should return the size of the volume rather than the capacity of your entire machine and use that for the capacity.
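For example, Docker's local volume driver can back a volume with a size-capped tmpfs; a sketch (note that tmpfs contents vanish on restart, so this suits experiments rather than durable storage):

volumes:
  hadoop-datanode-1:
    driver: local
    driver_opts:
      type: tmpfs
      device: tmpfs
      o: "size=100g"  # the capacity each datanode will report; adjust to your machine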

Creating spark cluster with drone.yml not working

I have a docker-compose.yml with the image and configuration below
version: '3'
services:
  spark-master:
    image: bde2020/spark-master:2.4.4-hadoop2.7
    container_name: spark-master
    ports:
      - "8080:8080"
      - "7077:7077"
    environment:
      - INIT_DAEMON_STEP=setup_spark
  spark-worker-1:
    image: bde2020/spark-worker:2.4.4-hadoop2.7
    container_name: spark-worker-1
    depends_on:
      - spark-master
    ports:
      - "8081:8081"
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
Here is the docker-compose up log ---> https://jpst.it/1Xc4K
Here the containers are up and running, i.e. the Spark worker connects to the Spark master without any issues. Now the problem: I created a drone.yml where I added a services component with
services:
  jce-cassandra:
    image: cassandra:3.0
    ports:
      - "9042:9042"
  jce-elastic:
    image: elasticsearch:5.6.16-alpine
    ports:
      - "9200:9200"
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
  janusgraph:
    image: janusgraph/janusgraph:latest
    ports:
      - "8182:8182"
    environment:
      JANUS_PROPS_TEMPLATE: cassandra-es
      janusgraph.storage.backend: cql
      janusgraph.storage.hostname: jce-cassandra
      janusgraph.index.search.backend: elasticsearch
      janusgraph.index.search.hostname: jce-elastic
    depends_on:
      - jce-elastic
      - jce-cassandra
  spark-master:
    image: bde2020/spark-master:2.4.4-hadoop2.7
    container_name: spark-master
    ports:
      - "8080:8080"
      - "7077:7077"
    environment:
      - INIT_DAEMON_STEP=setup_spark
  spark-worker-1:
    image: bde2020/spark-worker:2.4.4-hadoop2.7
    container_name: spark-worker-1
    depends_on:
      - spark-master
    ports:
      - "8081:8081"
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
But here the spark worker does not connect to the spark master and gets exceptions; here are the exception log details. Can someone please guide me on why I am facing this issue?
Note: I am trying to create these services in drone.yml for my integration testing.
Answering for better formatting. The comments suggest sleeping. Assuming this is the Dockerfile (https://hub.docker.com/r/bde2020/spark-worker/dockerfile), you could sleep by adding the command:
spark-worker-1:
  image: bde2020/spark-worker:2.4.4-hadoop2.7
  container_name: spark-worker-1
  # wrap in a shell so the && is interpreted rather than passed to sleep as arguments
  command: bash -c "sleep 10 && /bin/bash /worker.sh"
  depends_on:
    - spark-master
  ports:
    - "8081:8081"
  environment:
    - "SPARK_MASTER=spark://spark-master:7077"
Although sleep 10 is probably excessive; if this works, it would likely also work with sleep 5 or sleep 2.
