Running a script to wget some resources when OpenWrt starts - openwrt

#!/bin/sh /etc/rc.common
START=95
STOP=10

start() {
    # wait until ping produces output, i.e. the network is up
    while true; do
        exist=$(ping -c 2 www.baidu.com | wc -l)
        if [ "$exist" -ne 0 ]; then
            break
        fi
    done
    wget -O /zhuye.html http://www.baidu.com
}
When OpenWrt restarts, I want the system to run this script, but wget doesn't work. Why?

I solved it: I just changed `wget -O /zhuye.html http://www.baidu.com` to `wget -O /zhuye.html http://www.baidu.com >/dev/null 2>&1`.
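For what it's worth, the wait loop can also be made more robust by testing ping's exit status directly instead of counting output lines (ping may print lines even while failing). A sketch of an alternative start(), keeping the same host and paths, and assuming BusyBox ping/wget as found on OpenWrt:

```shell
# start() for the init script: loop until ping itself succeeds (exit status 0),
# then fetch the page with all output silenced, as in the accepted fix.
start() {
    until ping -c 1 -W 2 www.baidu.com >/dev/null 2>&1; do
        sleep 5
    done
    wget -O /zhuye.html http://www.baidu.com >/dev/null 2>&1
}
```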

Related

Error when building docker image for jupyter spark notebook

I am trying to build a Jupyter notebook image in Docker, following the guide here:
https://github.com/cordon-thiago/airflow-spark
and got an error with exit code 8.
I ran:
$ docker build --rm --force-rm -t jupyter/pyspark-notebook:3.0.1 .
the build stops at this step:
RUN wget -q $(wget -qO- https://www.apache.org/dyn/closer.lua/spark/spark-${APACHE_SPARK_VERSION}/spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz\?as_json | \
python -c "import sys, json; content=json.load(sys.stdin); print(content['preferred']+content['path_info'])") && \
echo "${spark_checksum} *spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz" | sha512sum -c - && \
tar xzf "spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz" -C /usr/local --owner root --group root --no-same-owner && \
rm "spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz"
with error message like below:
=> ERROR [4/9] RUN wget -q $(wget -qO- https://www.apache.org/dyn/closer.lua/spark/spark-3.0.1/spark-3.0.1-bin-hadoop2.7.tgz?as_json | python -c "import sys, json; content=json.load(sys.stdin); 2.3s
------
> [4/9] RUN wget -q $(wget -qO- https://www.apache.org/dyn/closer.lua/spark/spark-3.0.1/spark-3.0.1-bin-hadoop2.7.tgz?as_json | python -c "import sys, json; content=json.load(sys.stdin); print(content[
'preferred']+content['path_info'])") && echo "F4A10BAEC5B8FF1841F10651CAC2C4AA39C162D3029CA180A9749149E6060805B5B5DDF9287B4AA321434810172F8CC0534943AC005531BB48B6622FBE228DDC *spark-3.0.1-bin-hadoop2.7.
tgz" | sha512sum -c - && tar xzf "spark-3.0.1-bin-hadoop2.7.tgz" -C /usr/local --owner root --group root --no-same-owner && rm "spark-3.0.1-bin-hadoop2.7.tgz":
------
executor failed running [/bin/bash -o pipefail -c wget -q $(wget -qO- https://www.apache.org/dyn/closer.lua/spark/spark-${APACHE_SPARK_VERSION}/spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz\
?as_json | python -c "import sys, json; content=json.load(sys.stdin); print(content['preferred']+content['path_info'])") && echo "${spark_checksum} *spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_
VERSION}.tgz" | sha512sum -c - && tar xzf "spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz" -C /usr/local --owner root --group root --no-same-owner && rm "spark-${APACHE_SPARK_VERSION}
-bin-hadoop${HADOOP_VERSION}.tgz"]: exit code: 8
I'd really appreciate it if someone could enlighten me on this. Thanks!
Exit code 8 is likely from wget, and means the server returned an error response. For example, this path that the Dockerfile tries to wget isn't valid anymore: https://www.apache.org/dyn/closer.lua/spark/spark-3.0.1/spark-3.0.1-bin-hadoop2.7.tgz
From the issues on the repo, it appears Spark 3.0.1 is no longer available, so you should override the Spark version to 3.0.2 with a --build-arg:
docker build --rm --force-rm \
--build-arg spark_version=3.0.2 \
-t jupyter/pyspark-notebook:3.0.2 .
EDIT
See the comment below for more; the command that worked was:
docker build --rm --force-rm \
--build-arg spark_version=3.1.1 \
--build-arg hadoop_version=2.7 \
-t jupyter/pyspark-notebook:3.1.1 .
The Spark checksum also had to be updated to the 3.1.1 value: https://downloads.apache.org/spark/spark-3.1.1/spark-3.1.1-bin-hadoop2.7.tgz.sha512
For this answer to stay relevant, the versions and checksum will likely need updating again for the latest Spark/Hadoop releases.
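As an aside, the `sha512sum -c` step in the Dockerfile is exactly where a stale checksum makes the build fail; the mechanism can be reproduced locally with a throwaway file (the file name here is hypothetical):

```shell
# Create a stand-in tarball, compute its sha512, then verify it the same
# way the Dockerfile does: "checksum  filename" piped into sha512sum -c -
printf 'spark payload\n' > /tmp/spark-demo.tgz
sum=$(sha512sum /tmp/spark-demo.tgz | awk '{print $1}')
echo "${sum}  /tmp/spark-demo.tgz" | sha512sum -c -   # prints: /tmp/spark-demo.tgz: OK
```

If the checksum doesn't match the file, `sha512sum -c` exits non-zero and the `&&` chain in the RUN step aborts the build.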

Dockerfile build successfully but container can not run

I wrote a Dockerfile for Varnish Plus. docker build succeeds, but docker run fails with /bin/sh: 1: ./init: not found. What am I missing in the Dockerfile?
I'm trying to build a custom image for a Kubernetes Varnish deployment.
I tried other parameters, such as CMD ["sh", "init"], but then ./start-agent failed. If I put sh everywhere, I get a "not found" error for /etc/default/varnish. I also got an init done error saying expecting "then". I installed everything the same way on bare metal and it ran fine, but I can't get it to run in a Docker container.
FROM ubuntu:16.04
ARG varnishFile
ARG tokenName
ARG Project
ARG varnishPlusCredential="xxx"
RUN echo " $tokenName, $Project, $varnishFile "
RUN apt-get update
RUN apt-get -y install \
git \
python \
apt-transport-https \
wget \
curl \
gnupg2 \
libmicrohttpd10 \
libssl1.0.0 \
vim \
telnet
RUN wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
RUN echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | tee /etc/apt/sources.list.d/elastic-5.x.list
RUN curl https://${varnishPlusCredential}@repo.varnish-software.com/GPG-key.txt | apt-key add -
RUN echo "deb http://${varnishPlusCredential}@repo.varnish-software.com/ubuntu trusty varnish-4.1-plus" >> /etc/apt/sources.list
RUN echo "deb http://${varnishPlusCredential}@repo.varnish-software.com/ubuntu trusty non-free" >> /etc/apt/sources.list
RUN echo " #apt-get update "
RUN apt-get update -y
RUN apt-get -y install \
varnish-plus \
varnish-plus-ha \
varnish-agent \
filebeat \
varnishtuner
RUN vha-generate-vcl --token ${tokenName} > /etc/varnish/vha.vcl
COPY /${Project}/varnishConfiguration/nodes.conf /etc/varnish/nodes.conf
COPY /${Project}/varnishConfiguration/default.vcl /etc/varnish/vcl/default.vcl
COPY /${Project}/varnishConfiguration/varnish /etc/default/varnish
COPY /${Project}/varnishConfiguration/varnishncsa /etc/default/varnishncsa
COPY /"${Project}"/varnishConfiguration/varnishncsa-init.d/varnishncsa /etc/init.d
#Copy varnish configuration files for varnish nodes
COPY /${Project}/${varnishFile}/varnish-agent /etc/default/varnish-agent
COPY /${Project}/${varnishFile}/vha-agent /etc/default/vha-agent
COPY /${Project}/filebeat/filebeat.yml /etc/filebeat/filebeat.yml
COPY /scripts/start-varnish-agent.sh /start-varnish-agent
COPY /scripts/start-varnish.sh /start-varnish
COPY /scripts/start-vha-agent.sh /start-vha-agent
COPY /scripts/start-varnishncsa.sh /start-varnishncsa
COPY /scripts/start-filebeat.sh /start-filebeat
COPY /scripts/init.sh /init
#Execute permission for startup scripts
RUN chmod +x /init \
/start-varnish-agent \
/start-varnish \
/start-vha-agent \
/start-varnishncsa \
/etc/init.d/varnishncsa \
/start-filebeat
EXPOSE 80
EXPOSE 6082
EXPOSE 6085
CMD ./init
My init.sh file is located under scripts folder on same location with dockerfile.
#!/bin/bash
# Start the varnish service
./start-varnish
status=$?
if [ $status -ne 0 ]; then
echo "Failed to start varnish service: $status"
exit $status
fi
# Start the vha-agent
./start-vha-agent
status=$?
if [ $status -ne 0 ]; then
echo "Failed to start vha-agent: $status"
exit $status
fi
# Start the varnish-agent
./start-varnish-agent
status=$?
if [ $status -ne 0 ]; then
echo "Failed to start varnish-agent: $status"
exit $status
fi
# Start the varnishncsa
./start-varnishncsa
status=$?
if [ $status -ne 0 ]; then
echo "Failed to start varnishncsa: $status"
exit $status
fi
# Start the filebeat
./start-filebeat
status=$?
if [ $status -ne 0 ]; then
echo "Failed to start filebeat: $status"
exit $status
fi
while sleep 60; do
ps aux |grep varnishd |grep -v "grep"
PROCESS_1_STATUS=$?
ps aux |grep vha-agent |grep -v "grep"
PROCESS_2_STATUS=$?
ps aux |grep varnish-agent |grep -v "grep"
PROCESS_3_STATUS=$?
ps aux |grep varnishncsa |grep -v "grep"
PROCESS_4_STATUS=$?
# If the greps above find anything, they exit with status 0
# If any of them is non-zero, then something is wrong
if [ $PROCESS_1_STATUS -ne 0 -o $PROCESS_2_STATUS -ne 0 -o $PROCESS_3_STATUS -ne 0 -o $PROCESS_4_STATUS -ne 0 ]; then
echo "One of the processes has already exited."
exit 1
fi
done
Varnish Software has an official Varnish Cache Plus Docker image. If you have a subscription, which I suspect you have, you can get help from support via support@varnish-software.com.
Support can have a look at your Dockerfile and advise you, but they can also explain how you can use the official image to get the job done, without having to maintain the Dockerfile yourself.
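Separately from support, one common cause of `./init: not found` at run time is Windows (CRLF) line endings in the script: the kernel then looks for an interpreter literally named `/bin/sh\r`, which does not exist. This is only a guess about the Dockerfile above, but it is easy to check; a self-contained reproduction with a throwaway script:

```shell
# A script saved with CRLF line endings fails to execute ...
printf '#!/bin/sh\r\necho hi\r\n' > /tmp/crlf-demo
chmod +x /tmp/crlf-demo
/tmp/crlf-demo 2>/dev/null || echo "fails: interpreter /bin/sh<CR> not found"
# ... until the carriage returns are stripped
sed -i 's/\r$//' /tmp/crlf-demo
/tmp/crlf-demo    # prints: hi
```

If the image shows the same symptom, adding `RUN sed -i 's/\r$//' /init` after the COPY (or fixing the files' line endings before building) would be the corresponding fix.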

Docker container healthcheck stop unhealthy container

I have a docker container with a healthcheck running every 1 min. I read that appending "|| kill 1" to the healthcheck in the Dockerfile can stop the container after the healthcheck fails, but it does not seem to work for me, and I cannot find a working example.
Does anybody know how I can stop the container after marked as unhealthy? I currently have this in my dockerfile:
HEALTHCHECK --start-period=30s --timeout=5s --interval=1m --retries=2 CMD bash /expressvpn/healthcheck.sh || kill 1
EDIT 1
Dockerfile
FROM debian:buster-slim
ENV CODE="code"
ENV SERVER="smart"
ARG VERSION="expressvpn_2.6.0.32-1_armhf.deb"
COPY files/ /expressvpn/
RUN apt-get update && apt-get install -y --no-install-recommends \
expect curl ca-certificates iproute2 wget jq \
&& wget -q https://download.expressvpn.xyz/clients/linux/${VERSION} -O /expressvpn/${VERSION} \
&& dpkg -i /expressvpn/${VERSION} \
&& rm -rf /expressvpn/*.deb \
&& rm -rf /var/lib/apt/lists/* \
&& apt-get purge --autoremove -y wget \
&& rm -rf /var/log/*.log
HEALTHCHECK --start-period=30s --timeout=5s --interval=1m --retries=2 CMD bash /expressvpn/healthcheck.sh || exit 1
ENTRYPOINT ["/bin/bash", "/expressvpn/start.sh"]
healthcheck.sh
if [[ ! -z $DDNS ]];
then
checkIP=$(getent hosts $DDNS | awk '{ print $1 }')
else
checkIP=$IP
fi
if [[ ! -z $checkIP ]];
then
ipinfo=$(curl -s -H "Authorization: Bearer $BEARER" 'ipinfo.io' | jq -r '.')
currentIP=$(jq -r '.ip' <<< "$ipinfo")
hostname=$(jq -r '.hostname' <<< "$ipinfo")
if [[ $checkIP = $currentIP ]];
then
if [[ ! -z $HEALTHCHECK ]];
then
curl https://hc-ping.com/$HEALTHCHECK/fail
expressvpn disconnect
expressvpn connect $SERVER
exit 1
else
expressvpn disconnect
expressvpn connect $SERVER
exit 1
fi
else
if [[ ! -z $HOSTNAME_PART && ! -z $hostname && $hostname != *"$HOSTNAME_PART"* ]];
then
#THIS IS WHERE THE CONTAINER SHOULD STOP <------------
kill 1
fi
if [[ ! -z $HEALTHCHECK ]];
then
curl https://hc-ping.com/$HEALTHCHECK
exit 0
else
exit 0
fi
fi
else
exit 0
fi
start.sh
#!/usr/bin/bash
cp /etc/resolv.conf /etc/resolv.conf.bak
umount /etc/resolv.conf
cp /etc/resolv.conf.bak /etc/resolv.conf
rm /etc/resolv.conf.bak
service expressvpn restart
expect /expressvpn/activate.sh
expressvpn connect $SERVER
touch /var/log/temp.log
tail -f /var/log/temp.log
exec "$@"
Try changing kill to exit 1:
HEALTHCHECK --start-period=30s --timeout=5s --interval=1m --retries=2 \
CMD bash /expressvpn/healthcheck.sh || exit 1
Reference from docker docs
Edit 1:
After some testing: if you want to kill the container on unhealthy status, you need to do it in the health check script (/expressvpn/healthcheck.sh) or with a script on the host.
In the following example the container status stays healthy:
HEALTHCHECK --start-period=30s --timeout=5s --interval=10s --retries=2 CMD bash -c 'echo "0" || kill 1' || exit 1
In the following example the container stops: the command ech does not exist, so kill 1 is executed and the container gets killed:
HEALTHCHECK --start-period=30s --timeout=5s --interval=10s --retries=2 CMD bash -c 'ech "0" || kill 1' || exit 1
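The difference between the two comes down to shell short-circuiting, which can be checked outside Docker (`ech` standing in for any misspelled command):

```shell
# echo succeeds, so the right-hand side of || never runs
bash -c 'echo "healthy" || echo "would kill 1 here"'              # prints: healthy
# ech does not exist, so the || fallback runs instead
bash -c 'ech "healthy" 2>/dev/null || echo "would kill 1 here"'   # prints: would kill 1 here
```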
Edit 2:
Well, after a bit of digging I understood something I had seen in some Dockerfiles:
RUN apt update -y && apt install tini -y
ENTRYPOINT ["tini", "--"]
CMD ["./echo.sh"]
From what I gathered, signals sent to PID 1 (the entrypoint process) are ignored unless the process installs handlers for them, so kill 1 does nothing; tini is a small init utility that runs as PID 1, installs those handlers, and forwards signals to its child (still not sure about all the details, I will keep it for next time).
Anyway, after adding tini the container did get killed by kill 1.
Thank you for the question.
Please check the output of your health check. You'll have to ensure that it actually fails 2 consecutive times:
docker inspect --format "{{json .State.Health }}" <container name> | jq
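For completeness, the "script on the host" route mentioned above could be as simple as the sketch below (the function name is mine, not a Docker feature; it assumes the docker CLI is available on the host):

```shell
# Stop every container whose health status is currently "unhealthy".
stop_unhealthy() {
    for id in $(docker ps --filter health=unhealthy --format '{{.ID}}'); do
        docker stop "$id"
    done
}
# e.g. run periodically from cron:  */5 * * * * /usr/local/bin/stop-unhealthy.sh
```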

How to use cron inside a docker container

I tried to add a crontab inside the Docker image "jenkinsci/blueocean", but after that Jenkins does not start. Where could the problem be?
Many thanks in advance for any help.
<Dockerfile>
FROM jenkinsci/blueocean:1.17.0
USER root
ENV SUPERCRONIC_URL=https://github.com/aptible/supercronic/releases/download/v0.1.9/supercronic-linux-amd64 \
SUPERCRONIC=supercronic-linux-amd64 \
SUPERCRONIC_SHA1SUM=5ddf8ea26b56d4a7ff6faecdd8966610d5cb9d85
RUN curl -fsSLO "$SUPERCRONIC_URL" \
&& echo "${SUPERCRONIC_SHA1SUM} ${SUPERCRONIC}" | sha1sum -c - \
&& chmod +x "$SUPERCRONIC" \
&& mv "$SUPERCRONIC" "/usr/local/bin/${SUPERCRONIC}" \
&& ln -s "/usr/local/bin/${SUPERCRONIC}" /usr/local/bin/supercronic
ADD crontab /etc/crontab
CMD ["supercronic", "/etc/crontab"]
<crontab>
# Run every minute
*/1 * * * * echo "hello world"
commands:
$docker build -t jenkins_test .
$docker run -it -p 8080:8080 --name=container_jenkins jenkins_test
If you run docker inspect jenkinsci/blueocean:1.17.0 you will see its entrypoint is:
"Entrypoint": [
"/sbin/tini",
"--",
"/usr/local/bin/jenkins.sh"
],
So, when the container starts, it will first execute the following script.
/usr/local/bin/jenkins.sh:
#! /bin/bash -e
: "${JENKINS_WAR:="/usr/share/jenkins/jenkins.war"}"
: "${JENKINS_HOME:="/var/jenkins_home"}"
touch "${COPY_REFERENCE_FILE_LOG}" || { echo "Can not write to ${COPY_REFERENCE_FILE_LOG}. Wrong volume permissions?"; exit 1; }
echo "--- Copying files at $(date)" >> "$COPY_REFERENCE_FILE_LOG"
find /usr/share/jenkins/ref/ \( -type f -o -type l \) -exec bash -c '. /usr/local/bin/jenkins-support; for arg; do copy_reference_file "$arg"; done' _ {} +
# if `docker run` first argument start with `--` the user is passing jenkins launcher arguments
if [[ $# -lt 1 ]] || [[ "$1" == "--"* ]]; then
# read JAVA_OPTS and JENKINS_OPTS into arrays to avoid need for eval (and associated vulnerabilities)
java_opts_array=()
while IFS= read -r -d '' item; do
java_opts_array+=( "$item" )
done < <([[ $JAVA_OPTS ]] && xargs printf '%s\0' <<<"$JAVA_OPTS")
readonly agent_port_property='jenkins.model.Jenkins.slaveAgentPort'
if [ -n "${JENKINS_SLAVE_AGENT_PORT:-}" ] && [[ "${JAVA_OPTS:-}" != *"${agent_port_property}"* ]]; then
java_opts_array+=( "-D${agent_port_property}=${JENKINS_SLAVE_AGENT_PORT}" )
fi
if [[ "$DEBUG" ]] ; then
java_opts_array+=( \
'-Xdebug' \
'-Xrunjdwp:server=y,transport=dt_socket,address=5005,suspend=y' \
)
fi
jenkins_opts_array=( )
while IFS= read -r -d '' item; do
jenkins_opts_array+=( "$item" )
done < <([[ $JENKINS_OPTS ]] && xargs printf '%s\0' <<<"$JENKINS_OPTS")
exec java -Duser.home="$JENKINS_HOME" "${java_opts_array[@]}" -jar ${JENKINS_WAR} "${jenkins_opts_array[@]}" "$@"
fi
# As argument is not jenkins, assume user want to run his own process, for example a `bash` shell to explore this image
exec "$@"
From the script above you can see that if you add CMD ["supercronic", "/etc/crontab"] to your own Dockerfile, then when your container starts it is equivalent to executing:
/usr/local/bin/jenkins.sh "supercronic" "/etc/crontab"
Since if [[ $# -lt 1 ]] || [[ "$1" == "--"* ]]; then does not match, it directly executes the exec "$@" at the last line, which means the Jenkins startup code never runs.
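The argument check can be seen in isolation; a sketch mimicking jenkins.sh's dispatch (the function name and messages are mine, for illustration only):

```shell
# No args, or a first arg starting with "--", would launch Jenkins;
# anything else would be exec'd as a user-supplied command instead.
dispatch() {
    case "${1:-}" in
        ""|--*) echo "start jenkins with: $*" ;;
        *)      echo "exec user command: $*" ;;
    esac
}
dispatch --httpPort=9090            # prints: start jenkins with: --httpPort=9090
dispatch supercronic /etc/crontab   # prints: exec user command: supercronic /etc/crontab
```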
To fix it, use your own docker-entrypoint.sh to override the default entrypoint:
docker-entrypoint.sh:
#!/bin/bash
supercronic /etc/crontab &
/usr/local/bin/jenkins.sh
Dockerfile:
FROM jenkinsci/blueocean:1.17.0
USER root
ENV SUPERCRONIC_URL=https://github.com/aptible/supercronic/releases/download/v0.1.9/supercronic-linux-amd64 \
SUPERCRONIC=supercronic-linux-amd64 \
SUPERCRONIC_SHA1SUM=5ddf8ea26b56d4a7ff6faecdd8966610d5cb9d85
RUN curl -fsSLO "$SUPERCRONIC_URL" \
&& echo "${SUPERCRONIC_SHA1SUM} ${SUPERCRONIC}" | sha1sum -c - \
&& chmod +x "$SUPERCRONIC" \
&& mv "$SUPERCRONIC" "/usr/local/bin/${SUPERCRONIC}" \
&& ln -s "/usr/local/bin/${SUPERCRONIC}" /usr/local/bin/supercronic
ADD crontab /etc/crontab
COPY docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/sbin/tini", "--", "/docker-entrypoint.sh"]

How do I start Jupyter notebook inside a docker container which runs airflow webserver, worker, etc?

I want to start a Jupyter notebook from inside the Docker container built from the following image:
FROM debian:stretch
# Never prompts the user for choices on installation/configuration of packages
ENV DEBIAN_FRONTEND noninteractive
ENV TERM linux
# Airflow
ARG AIRFLOW_VERSION=1.10.1
ENV AIRFLOW_HOME=/usr/local/airflow
ENV EMBEDDED_DAGS_LOCATION=./dags
ENV EMBEDDED_PLUGINS_LOCATION=./plugins
ENV SLUGIFY_USES_TEXT_UNIDECODE=yes
ENV PYTHONPATH=${PYTHONPATH}:${AIRFLOW_HOME}/athena-py
# Define en_US.
ENV LANGUAGE en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LC_ALL en_US.UTF-8
ENV LC_CTYPE en_US.UTF-8
ENV LC_MESSAGES en_US.UTF-8
ENV LC_ALL en_US.UTF-8
WORKDIR /requirements
# Only copy needed files
COPY ./requirements/airflow.txt /requirements/airflow.txt
RUN set -ex \
&& buildDeps=' \
build-essential \
libblas-dev \
libffi-dev \
libkrb5-dev \
liblapack-dev \
libpq-dev \
libxml2-dev \
libxslt1-dev \
python3-pip \
zlib1g-dev \
libcurl4-gnutls-dev \
libssh2-1-dev \
libldap2-dev \
' \
&& apt-get update -yqq \
&& apt-get install -yqq --no-install-recommends \
$buildDeps \
apt-utils \
curl \
git \
locales \
netcat \
gcc \
python3-dev \
libssl-dev \
libsasl2-dev \
openssh-server \
libsasl2-modules \
\
&& sed -i 's/^# en_US.UTF-8 UTF-8$/en_US.UTF-8 UTF-8/g' /etc/locale.gen \
&& locale-gen \
&& update-locale LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 \
&& useradd -ms /bin/bash -d ${AIRFLOW_HOME} -u 1002 airflow \
&& pip3 install --upgrade pip==9.0.3 'setuptools!=36.0.0' \
&& if [ ! -e /usr/bin/pip ]; then ln -s /usr/bin/pip3 /usr/bin/pip ; fi \
&& if [ ! -e /usr/bin/python ]; then ln -sf /usr/bin/python3 /usr/bin/python; fi \
&& pip3 install -r /requirements/airflow.txt \
&& apt-get remove --purge -yqq $buildDeps libpq-dev \
&& apt-get clean \
&& rm -rf \
/var/lib/apt/lists/* \
/tmp/* \
/var/tmp/* \
/usr/share/man \
/usr/share/doc \
/usr/share/doc-base
# Install env key
RUN curl -s https://raw.githubusercontent.com/envkey/envkey-source/master/install.sh | bash
# install athena HEAD
WORKDIR ${AIRFLOW_HOME}
COPY ./some-dir/requirements.txt some-dir/requirements.txt
RUN pip3 install -r some-dir/requirements.txt
COPY ./some-dir/ some-dir
COPY ./tests/ tests
COPY script/entrypoint.sh entrypoint.sh
COPY script/setup_connections.py setup_connections.py
COPY config/airflow.cfg airflow.cfg
COPY ${EMBEDDED_PLUGINS_LOCATION} plugins
COPY ${EMBEDDED_DAGS_LOCATION} dags
# Python3 Kernel for JuPyter notebooks
RUN python3 -m pip install ipykernel
RUN python3 -m ipykernel install --user
RUN mkdir -p /usr/local/airflow/.ipython/profile_default/startup/
RUN echo "import pandas as pd" > /usr/local/airflow/.ipython/profile_default/startup/athena.py
RUN python3 -m pip install --upgrade pip
RUN python3 -m pip install jupyter
RUN chown -R airflow ${AIRFLOW_HOME} \
&& chmod +x entrypoint.sh
EXPOSE 8080 5555 8793 8280 8888
USER airflow
ENTRYPOINT ["./entrypoint.sh"]
Notice the # start jupyter notebook line,
"jupyter-notebook", "--ip=0.0.0.0", "--allow-root"
in my entrypoint.sh.
The entrypoint.sh is:
#!/usr/bin/env bash
echo "Setting up env vars..."
eval $(envkey-source)
echo "----------------------------------------------------------------------"
CMD="airflow"
TRY_LOOP="${TRY_LOOP:-10}"
POSTGRES_HOST="${POSTGRES_HOST:-postgres}"
POSTGRES_PORT=5432
POSTGRES_CREDS="${POSTGRES_CREDS:-airflow:airflow}"
RABBITMQ_HOST="${RABBITMQ_HOST:-rabbitmq}"
RABBITMQ_CREDS="${RABBITMQ_CREDS:-airflow:airflow}"
RABBITMQ_MANAGEMENT_PORT=15672
FLOWER_URL_PREFIX="${FLOWER_URL_PREFIX:-/flower}"
AIRFLOW_URL_PREFIX="${AIRFLOW_URL_PREFIX:-/airflow}"
LOAD_DAGS_EXAMPLES="${LOAD_DAGS_EXAMPLES:-false}"
REST_API_KEY="${REST_API_KEY:-airflow_api_key}"
if [ -z $FERNET_KEY ]; then
FERNET_KEY=$(python3 -c "from cryptography.fernet import Fernet; FERNET_KEY = Fernet.generate_key().decode(); print(FERNET_KEY)")
fi
echo "Postgres host: $POSTGRES_HOST"
echo "RabbitMQ host: $RABBITMQ_HOST"
echo "Load DAG examples: $LOAD_DAGS_EXAMPLES"
echo $1
# Generate Fernet key
sed -i "s/{{ REST_API_KEY }}/${REST_API_KEY}/" $AIRFLOW_HOME/airflow.cfg
sed -i "s/{{ FERNET_KEY }}/${FERNET_KEY}/" $AIRFLOW_HOME/airflow.cfg
sed -i "s/{{ POSTGRES_HOST }}/${POSTGRES_HOST}/" $AIRFLOW_HOME/airflow.cfg
sed -i "s/{{ POSTGRES_CREDS }}/${POSTGRES_CREDS}/" $AIRFLOW_HOME/airflow.cfg
sed -i "s/{{ RABBITMQ_HOST }}/${RABBITMQ_HOST}/" $AIRFLOW_HOME/airflow.cfg
sed -i "s/{{ RABBITMQ_CREDS }}/${RABBITMQ_CREDS}/" $AIRFLOW_HOME/airflow.cfg
sed -i "s/{{ LOAD_DAGS_EXAMPLES }}/${LOAD_DAGS_EXAMPLES}/" $AIRFLOW_HOME/airflow.cfg
sed -i "s#{{ FLOWER_URL_PREFIX }}#${FLOWER_URL_PREFIX}#" $AIRFLOW_HOME/airflow.cfg
sed -i "s#{{ AIRFLOW_URL_PREFIX }}#${AIRFLOW_URL_PREFIX}#" $AIRFLOW_HOME/airflow.cfg
# wait for rabbitmq
if [ "$1" = "webserver" ] || [ "$1" = "worker" ] || [ "$1" = "scheduler" ] || [ "$1" = "flower" ] ; then
j=0
while ! curl -sI -u $RABBITMQ_CREDS http://$RABBITMQ_HOST:$RABBITMQ_MANAGEMENT_PORT/api/whoami |grep '200 OK'; do
j=`expr $j + 1`
if [ $j -ge $TRY_LOOP ]; then
echo "$(date) - $RABBITMQ_HOST still not reachable, giving up"
exit 1
fi
echo "$(date) - waiting for RabbitMQ... $j/$TRY_LOOP"
sleep 5
done
fi
# wait for postgres
if [ "$1" = "webserver" ] || [ "$1" = "worker" ] || [ "$1" = "scheduler" ] || [ "$1" = "test" ] ; then
i=0
while ! nc -z $POSTGRES_HOST $POSTGRES_PORT; do
i=`expr $i + 1`
if [ $i -ge $TRY_LOOP ]; then
echo "$(date) - ${POSTGRES_HOST}:${POSTGRES_PORT} still not reachable, giving up"
exit 1
fi
echo "$(date) - waiting for ${POSTGRES_HOST}:${POSTGRES_PORT}... $i/$TRY_LOOP"
sleep 5
done
# TODO: move to a Helm hook
# https://github.com/kubernetes/helm/blob/master/docs/charts_hooks.md
if [ "$1" = "webserver" ]; then
echo "----------------------------------------------------------------------"
echo "----------------------------------------------------------------------"
echo "Initialize database..."
echo "----------------------------------------------------------------------"
echo "----------------------------------------------------------------------"
$CMD initdb
echo "----------------------------------------------------------------------"
echo "----------------------------------------------------------------------"
echo "setting up connections..."
echo "----------------------------------------------------------------------"
echo "----------------------------------------------------------------------"
python setup_connections.py
echo "----------------------------------------------------------------------"
echo "----------------------------------------------------------------------"
fi
if [ "$1" = "test" ]; then
echo "----------------------------------------------------------------------"
echo "----------------------------------------------------------------------"
echo "Initialize database..."
echo "----------------------------------------------------------------------"
echo "----------------------------------------------------------------------"
$CMD initdb
echo "----------------------------------------------------------------------"
echo "----------------------------------------------------------------------"
echo "setting up connections..."
echo "----------------------------------------------------------------------"
echo "----------------------------------------------------------------------"
python setup_connections.py
echo "----------------------------------------------------------------------"
echo "----------------------------------------------------------------------"
echo "Running tests..."
echo "----------------------------------------------------------------------"
echo "----------------------------------------------------------------------"
python -m unittest discover
echo "----------------------------------------------------------------------"
echo "----------------------------------------------------------------------"
echo "exiting.."
exit 1
fi
fi
# start jupyter notebook
"jupyter-notebook", "--ip=0.0.0.0", "--allow-root"
$CMD "$@"
I read several articles and answers; they described starting Jupyter notebook with a specific port and IP, and some suggested writing a docker-compose file, but all of those methods were in vain.
I expect to use the Jupyter notebook at localhost:8888, just like I access the Airflow webserver via localhost:8080. However, right now I get the error: Connection refused
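No answer was posted, but the `# start jupyter notebook` line quoted above looks like the culprit: `"jupyter-notebook", "--ip=0.0.0.0", "--allow-root"` is Dockerfile CMD array syntax pasted into a shell script, so the shell tries to run a command literally named `jupyter-notebook,`. A quick demonstration with `echo` standing in:

```shell
# Dockerfile-style array pasted into a script is not a valid shell command
sh -c '"echo", "hello"' 2>/dev/null || echo "array syntax fails in shell"
# plain command syntax works
sh -c 'echo hello'   # prints: hello
```

If that diagnosis is right, the hypothetical fix would be a plain background invocation before the final hand-off, e.g. `jupyter notebook --ip=0.0.0.0 --allow-root &` followed by `$CMD "$@"`.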
