Conda build inside a Docker container fails

I am trying to integrate my Anaconda package build system with the Codeship CI/CD service, so my build process needs to happen inside a Docker container.
My Dockerfile currently looks like this:
FROM continuumio/miniconda3
COPY . .
RUN conda create --yes --name build-env python=3.8 \
    && conda install -n build-env conda-build -y \
    && conda run -n build-env conda-build --channel haasad .
RUN conda create --yes --name testing-env python=3.8 \
    && conda install -n testing-env --use-local sten -c haasad \
    && conda install -n testing-env -c anaconda pytest
When the build runs, the following error occurs:
/opt/conda/envs/build-env/conda-bld/chardet_1591782226225/work/conda_build.sh: line 4: /tmp/build/80754af9/chardet_1573033772973/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placeh/bin/python: No such file or directory
Traceback (most recent call last):
File "/opt/conda/envs/build-env/bin/conda-build", line 11, in <module>
sys.exit(main())
File "/opt/conda/envs/build-env/lib/python3.8/site-packages/conda_build/cli/main_build.py", line 469, in main
execute(sys.argv[1:])
File "/opt/conda/envs/build-env/lib/python3.8/site-packages/conda_build/cli/main_build.py", line 458, in execute
outputs = api.build(args.recipe, post=args.post, build_only=args.build_only,
File "/opt/conda/envs/build-env/lib/python3.8/site-packages/conda_build/api.py", line 208, in build
return build_tree(absolute_recipes, config, stats, build_only=build_only, post=post,
File "/opt/conda/envs/build-env/lib/python3.8/site-packages/conda_build/build.py", line 2339, in build_tree
packages_from_this = build(metadata, stats,
File "/opt/conda/envs/build-env/lib/python3.8/site-packages/conda_build/build.py", line 1491, in build
utils.check_call_env(cmd, env=env, rewrite_stdout_env=rewrite_env,
File "/opt/conda/envs/build-env/lib/python3.8/site-packages/conda_build/utils.py", line 398, in check_call_env
return _func_defaulting_env_to_os_environ('call', *popenargs, **kwargs)
File "/opt/conda/envs/build-env/lib/python3.8/site-packages/conda_build/utils.py", line 378, in _func_defaulting_env_to_os_environ
raise subprocess.CalledProcessError(proc.returncode, _args)
subprocess.CalledProcessError: Command '['/bin/bash', '-o', 'errexit', '/opt/conda/envs/build-env/conda-bld/chardet_1591782226225/work/conda_build.sh']' returned non-zero exit status 127.
How do I fix this, and what am I doing wrong?
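No accepted fix is recorded here, but errors of this shape (a placehold... prefix whose python does not exist, exit status 127) suggest conda-build's build-prefix handling went wrong. A minimal sketch of one thing to try, under two assumptions that are mine rather than the question's: give the build a dedicated working directory instead of copying into the image root, and call the environment's own conda-build binary instead of going through conda run.
FROM continuumio/miniconda3
# Assumption: copy the recipe into a dedicated directory rather than "/".
WORKDIR /src
COPY . .
RUN conda create --yes --name build-env python=3.8 conda-build \
    # Assumption: invoking the env's conda-build directly avoids the
    # indirection of "conda run", which can obscure build failures.
    && /opt/conda/envs/build-env/bin/conda-build --channel haasad .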

Related

How to make installed apps executable by the Lambda user when using Docker

I'm using a Docker image on AWS Lambda. In my Dockerfile, I've installed an executable (the Pulumi CLI tool) and confirmed successful installation by running RUN pulumi -version.
When I try to invoke this executable through my Lambda function, I get permission-denied errors from Python's Popen:
2023-01-25T14:41:45 ws = pulumi.automation.LocalWorkspace(work_dir="/tmp/", pulumi_home="/tmp/.pulumi")
2023-01-25T14:41:45 File "/var/lang/lib/python3.9/site-packages/pulumi/automation/_local_workspace.py", line 125, in __init__
2023-01-25T14:41:45 pulumi_version = self._get_pulumi_version()
2023-01-25T14:41:45 File "/var/lang/lib/python3.9/site-packages/pulumi/automation/_local_workspace.py", line 411, in _get_pulumi_version
2023-01-25T14:41:45 result = self._run_pulumi_cmd_sync(["version"])
2023-01-25T14:41:45 File "/var/lang/lib/python3.9/site-packages/pulumi/automation/_local_workspace.py", line 430, in _run_pulumi_cmd_sync
2023-01-25T14:41:45 return _run_pulumi_cmd(args, self.work_dir, envs, on_output)
2023-01-25T14:41:45 File "/var/lang/lib/python3.9/site-packages/pulumi/automation/_cmd.py", line 55, in _run_pulumi_cmd
2023-01-25T14:41:45 with subprocess.Popen(
2023-01-25T14:41:45 File "/var/lang/lib/python3.9/site-packages/sentry_sdk/integrations/stdlib.py", line 193, in sentry_patched_popen_init
2023-01-25T14:41:45 rv = old_popen_init(self, *a, **kw) # type: ignore
2023-01-25T14:41:45 File "/var/lang/lib/python3.9/subprocess.py", line 951, in __init__
2023-01-25T14:41:45 self._execute_child(args, executable, preexec_fn, close_fds,
2023-01-25T14:41:45 File "/var/lang/lib/python3.9/subprocess.py", line 1821, in _execute_child
2023-01-25T14:41:45 raise child_exception_type(errno_num, err_msg, err_filename)
2023-01-25T14:41:45 PermissionError: [Errno 13] Permission denied: 'pulumi'
I think this is a generic Lambda permissions issue, but here is some extra context in case it helps: I'm using the pulumi Python library, which invokes the Pulumi CLI via subprocess.
How can I ensure that things I install in my Dockerfile are executable by the lambda user?
Things I tried:
chmod -R a+wrx /root/.pulumi - the command runs fine, but I still get the permission error when invoking the executable.
I noticed that my Lambda user is sbx_user1051, so I tried chown -R sbx_user1051 /root/.pulumi - this fails, saying there is no such user. That makes it seem like the Lambda user is created after I deploy my Docker image.
In the end I resolved this by moving the installation out of /root and into the folder Lambda uses as the workspace for task execution:
FROM public.ecr.aws/lambda/python:3.9-x86_64
RUN yum install -y \
    tar \
    gzip \
    ca-certificates \
    curl \
    which \
    && yum clean all
# install pulumi - this will go to the current user's home: /root
RUN curl -fsSL https://get.pulumi.com | sh
# move pulumi to the lambda function's task root folder
RUN mv /root/.pulumi/ ${LAMBDA_TASK_ROOT}
# grant appropriate permissions
RUN chmod -R a+wrx ${LAMBDA_TASK_ROOT}/.pulumi
RUN chmod a+x ${LAMBDA_TASK_ROOT}/.pulumi/bin/pulumi
# add to path
ENV PATH="${PATH}:${LAMBDA_TASK_ROOT}/.pulumi/bin"
...
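To sanity-check the permissions before deploying, the image can be exercised locally as an arbitrary non-root user; the image tag and the UID below are assumptions for illustration, not the actual sandbox user Lambda assigns:
# Build the image, then run the CLI as a non-root user, as a rough local
# stand-in for Lambda's unprivileged sandbox user (the UID is arbitrary).
docker build -t pulumi-lambda .
docker run --rm -u 1000 --entrypoint pulumi pulumi-lambda version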

Submit a spark job from Airflow to external spark container

I have a Spark and Airflow cluster built with Docker Swarm. The Airflow container does not contain spark-submit, as I expected it would.
I am using the following images, which are available on GitHub:
Spark: big-data-europe/docker-hadoop-spark-workbench
Airflow: puckel/docker-airflow (CeleryExecutor)
I prepared a .py file and added it under the dags folder.
from airflow import DAG
from airflow.contrib.operators.spark_submit_operator import SparkSubmitOperator
from datetime import datetime, timedelta

args = {'owner': 'airflow', 'start_date': datetime(2018, 9, 24)}
dag = DAG('spark_example_new', default_args=args, schedule_interval="@once")
operator = SparkSubmitOperator(
    task_id='spark_submit_job',
    conn_id='spark_default',
    java_class='Main',
    application='/SimpleSpark.jar',
    name='airflow-spark-example',
    conf={'master': 'spark://master:7077'},
    dag=dag,
)
I also configured the connection in the web UI; master is the hostname of the Spark master container.
But it does not find spark-submit, and produces the following error:
[2018-09-24 08:48:14,063] {{logging_mixin.py:95}} INFO - [2018-09-24 08:48:14,062] {{spark_submit_hook.py:283}} INFO - Spark-Submit cmd: ['spark-submit', '--master', 'spark://master:7077', '--conf', 'master=spark://master:7077', '--name', 'airflow-spark-example', '--class', 'Main', '--queue', 'root.default', '/SimpleSpark.jar']
[2018-09-24 08:48:14,067] {{models.py:1736}} ERROR - [Errno 2] No such file or directory: 'spark-submit': 'spark-submit'
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/airflow/models.py", line 1633, in _run_raw_task
result = task_copy.execute(context=context)
File "/usr/local/lib/python3.6/site-packages/airflow/contrib/operators/spark_submit_operator.py", line 168, in execute
self._hook.submit(self._application)
File "/usr/local/lib/python3.6/site-packages/airflow/contrib/hooks/spark_submit_hook.py", line 330, in submit
**kwargs)
File "/usr/local/lib/python3.6/subprocess.py", line 709, in __init__
restore_signals, start_new_session)
File "/usr/local/lib/python3.6/subprocess.py", line 1344, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'spark-submit': 'spark-submit'
As far as I know, puckel/docker-airflow uses the Python slim image (https://hub.docker.com/_/python/). That image contains only the minimal packages needed to run Python, so you will need to extend the image and install spark-submit in your container.
Edit: Airflow does need the Spark binaries in the container to run SparkSubmitOperator, as documented here.
The other approach is to use SSHOperator to run the spark-submit command on an external VM by SSHing into a remote machine. But that requires SSH to be available in the container as well, which the Puckel image also lacks.
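A rough sketch of what extending the Puckel image might look like; the Spark version, download URL, and install path below are assumptions to adapt to your cluster, not part of the original answer:
FROM puckel/docker-airflow:latest
USER root
# Install a JRE and unpack a Spark distribution so spark-submit is on PATH.
# The Spark 2.4.8 / Hadoop 2.7 tarball is an assumption; match your cluster.
RUN apt-get update && apt-get install -y default-jre wget \
    && wget -qO- https://archive.apache.org/dist/spark/spark-2.4.8/spark-2.4.8-bin-hadoop2.7.tgz \
       | tar -xz -C /opt \
    && ln -s /opt/spark-2.4.8-bin-hadoop2.7 /opt/spark \
    && apt-get clean
ENV SPARK_HOME=/opt/spark
ENV PATH="${PATH}:${SPARK_HOME}/bin"
USER airflow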
This is a late answer: you should install apache-airflow-providers-apache-spark.
Create a file called requirements.txt and add apache-airflow-providers-apache-spark to it.
Then create a Dockerfile like this:
FROM apache/airflow:2.2.3
# Switch to root to install system packages (the base image defaults to the airflow user)
USER root
# Install OpenJDK-11
RUN apt update && \
    apt-get install -y openjdk-11-jdk && \
    apt-get install -y ant && \
    apt-get clean
# Set JAVA_HOME
ENV JAVA_HOME /usr/lib/jvm/java-11-openjdk-amd64/
USER airflow
COPY requirements.txt .
RUN pip install -r requirements.txt
In docker-compose.yml, comment out the line
# image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.2.3}
and uncomment the line build: .
Finally run
docker-compose build
docker-compose up

ConnectionError when running Redis server inside Docker

I have the following Python script, example.py:
import redis
r = redis.Redis()
r.set('a','b')
and the following Dockerfile:
FROM ubuntu:18.04
# Install system-wide dependencies
RUN apt-get -yqq update
RUN apt-get -yqq install python3-dev python3-pip
RUN apt-get -yqq install redis-tools redis-server
RUN pip3 install redis
# Copy application code
ADD . /usr/local/example
WORKDIR /usr/local/example
# Start application
CMD /etc/init.d/redis-server restart \
    && python3 example.py
After I build the image (docker build -t redis-example .) and run the container (docker run -P -it -d redis-example), the following stack trace is printed:
Stopping redis-server: redis-server.
Starting redis-server: redis-server.
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/redis/connection.py", line 484, in connect
sock = self._connect()
File "/usr/local/lib/python3.6/dist-packages/redis/connection.py", line 541, in _connect
raise err
File "/usr/local/lib/python3.6/dist-packages/redis/connection.py", line 529, in _connect
sock.connect(socket_address)
OSError: [Errno 99] Cannot assign requested address
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/redis/client.py", line 667, in execute_command
connection.send_command(*args)
File "/usr/local/lib/python3.6/dist-packages/redis/connection.py", line 610, in send_command
self.send_packed_command(self.pack_command(*args))
File "/usr/local/lib/python3.6/dist-packages/redis/connection.py", line 585, in send_packed_command
self.connect()
File "/usr/local/lib/python3.6/dist-packages/redis/connection.py", line 489, in connect
raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error 99 connecting to localhost:6379. Cannot assign requested address.
I'm not sure why this very simple example is failing in Docker. I'd like to run an instance of a Redis server local to the Docker container.
It does not need to be reachable outside of the container and should be unique to the container.
How can I resolve this exception?
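No answer is recorded above, but here is a hedged sketch of two things worth trying for this symptom; both are assumptions on my part, not a confirmed fix. First, "Cannot assign requested address" on localhost:6379 can happen when localhost resolves to an address Redis is not listening on (for example the IPv6 ::1), so pointing the client explicitly at redis.Redis(host='127.0.0.1') is a cheap check. Second, the init-script restart may not leave a server running inside the container, so starting redis-server directly can help:
# In the Dockerfile, start the server directly instead of via the init script
# (assumption: the init-script restart may not leave redis running here).
CMD redis-server --daemonize yes && python3 example.py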

Run CKAN using the Dockerfile from Docker Hub

Trying to install CKAN from the Dockerfile in:
Docker Community Edition
Version 17.06.2-ce-mac27 (19124)
Channel: stable
428bd6ceae*
FIRST ATTEMPT
These are the steps followed:
$ docker pull ckan/solr
$ docker pull ckan/ckan
$ docker pull ckan/postgresql
After downloading the images I get:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ckan/solr latest 4acd7db7b1f7 3 days ago 517MB
ckan/ckan latest 77dd30c92740 3 days ago 642MB
ckan/postgresql latest 3c3ecd94ae7e 3 days ago 265MB
For each Dockerfile I did the following:
For Solr:
$ docker build .
It fails at Step 7/9:
ADD failed: stat /var/lib/docker/tmp/docker-builder234804113/solrconfig.xml: no such file or directory
For PostgreSQL:
$ docker build .
Successfully built 8b296a6e3153
For CKAN:
$ docker build . -t ckan && docker run -d -p 80:5000 --link db:db --link redis:redis --link solr:solr ckan
Error in Step 7/26:
Get:1 http://security.debian.org/ jessie/updates/main perl-base amd64 5.20.2-3+deb8u9 [1226 kB]
debconf: delaying package configuration, since apt-utils is not installed
And it fails at:
Step 12/26 : ADD ./requirements.txt $CKAN_HOME/src/ckan/requirements.txt
ADD failed: stat /var/lib/docker/tmp/docker-builder814560705/requirements.txt: no such file or directory
SECOND ATTEMPT
I removed the images and pulled them again:
$ docker pull ckan/solr
$ docker pull ckan/ckan
$ docker pull ckan/postgresql
And launched:
$ docker build . -t ckan && docker run -d -p 80:5000 --link db:db --link redis:redis --link solr:solr ckan
for the Dockerfile in: https://hub.docker.com/r/ckan/ckan/~/dockerfile/
and I get the same error in the same step:
Step 12/26 : ADD ./requirements.txt $CKAN_HOME/src/ckan/requirements.txt
ADD failed: stat /var/lib/docker/tmp/docker-builder254648764/requirements.txt: no such file or directory
but the build log on Docker Hub looks fine.
Am I doing something wrong?
Dockerfile
FROM debian:jessie
MAINTAINER Open Knowledge
ENV CKAN_HOME /usr/lib/ckan/default
ENV CKAN_CONFIG /etc/ckan/default
ENV CKAN_STORAGE_PATH /var/lib/ckan
ENV CKAN_SITE_URL http://localhost:5000
# Install required packages
RUN apt-get -q -y update && apt-get -q -y upgrade && DEBIAN_FRONTEND=noninteractive apt-get -q -y install \
    python-dev \
    python-pip \
    python-virtualenv \
    libpq-dev \
    git-core \
    build-essential \
    libssl-dev \
    libffi-dev \
    && apt-get -q clean
# SetUp Virtual Environment CKAN
RUN mkdir -p $CKAN_HOME $CKAN_CONFIG $CKAN_STORAGE_PATH
RUN virtualenv $CKAN_HOME
RUN ln -s $CKAN_HOME/bin/pip /usr/local/bin/ckan-pip
RUN ln -s $CKAN_HOME/bin/paster /usr/local/bin/ckan-paster
# SetUp Requirements
ADD ./requirements.txt $CKAN_HOME/src/ckan/requirements.txt
RUN ckan-pip install --upgrade -r $CKAN_HOME/src/ckan/requirements.txt
# TMP-BUGFIX https://github.com/ckan/ckan/issues/3388
ADD ./dev-requirements.txt $CKAN_HOME/src/ckan/dev-requirements.txt
RUN ckan-pip install --upgrade -r $CKAN_HOME/src/ckan/dev-requirements.txt
# TMP-BUGFIX https://github.com/ckan/ckan/issues/3594
RUN ckan-pip install --upgrade urllib3
# SetUp CKAN
ADD . $CKAN_HOME/src/ckan/
RUN ckan-pip install -e $CKAN_HOME/src/ckan/
RUN ln -s $CKAN_HOME/src/ckan/ckan/config/who.ini $CKAN_CONFIG/who.ini
# SetUp EntryPoint
COPY ./contrib/docker/ckan-entrypoint.sh /
RUN chmod +x /ckan-entrypoint.sh
ENTRYPOINT ["/ckan-entrypoint.sh"]
# Volumes
VOLUME ["/etc/ckan/default"]
VOLUME ["/var/lib/ckan"]
EXPOSE 5000
CMD ["ckan-paster","serve","/etc/ckan/default/ckan.ini"]
UPDATE 1
As suggested, I added the content of the CKAN package from:
http://packaging.ckan.org/
and copied /data/usr/lib/ckan/default/src/ckan/* into my Dockerfile directory.
New error at Step 15 (see comments).
UPDATE 2
Cloned ckan from: https://github.com/ckan/ckan
$ docker build . -t ckan && docker run -d -p 80:5000 --link db:db --link redis:redis --link solr:solr ckan
Result:
Step 26/26 : CMD ckan-paster serve /etc/ckan/default/ckan.ini
---> Running in eeeb6ace6ee1
---> 3cd87cf4a1af
Removing intermediate container eeeb6ace6ee1
Successfully built 3cd87cf4a1af
Successfully tagged ckan:latest
docker: Error response from daemon: could not get container for db: No such container: db.
Trying to access CKAN:
docker run -it ckan bash
I get this error:
Distribution already installed:
ckan 2.8.0a0 from /usr/lib/ckan/default/src/ckan
Creating /etc/ckan/default/ckan.ini
Now you should edit the config files
/etc/ckan/default/ckan.ini
Traceback (most recent call last):
File "/usr/local/bin/ckan-paster", line 11, in <module>
sys.exit(run())
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/paste/script/command.py", line 102, in run
invoke(command, command_name, options, args[1:])
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/paste/script/command.py", line 141, in invoke
exit_code = runner.run(args)
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/paste/script/command.py", line 236, in run
result = self.command()
File "/usr/lib/ckan/default/src/ckan/ckan/lib/cli.py", line 348, in command
self._load_config(cmd!='upgrade')
File "/usr/lib/ckan/default/src/ckan/ckan/lib/cli.py", line 321, in _load_config
self.site_user = load_config(self.options.config, load_site_user)
File "/usr/lib/ckan/default/src/ckan/ckan/lib/cli.py", line 230, in load_config
load_environment(conf.global_conf, conf.local_conf)
File "/usr/lib/ckan/default/src/ckan/ckan/config/environment.py", line 111, in load_environment
p.load_all()
File "/usr/lib/ckan/default/src/ckan/ckan/plugins/core.py", line 129, in load_all
unload_all()
File "/usr/lib/ckan/default/src/ckan/ckan/plugins/core.py", line 182, in unload_all
unload(*reversed(_PLUGINS))
File "/usr/lib/ckan/default/src/ckan/ckan/plugins/core.py", line 210, in unload
plugins_update()
File "/usr/lib/ckan/default/src/ckan/ckan/plugins/core.py", line 121, in plugins_update
environment.update_config()
File "/usr/lib/ckan/default/src/ckan/ckan/config/environment.py", line 289, in update_config
engine = sqlalchemy.engine_from_config(config, client_encoding='utf8')
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/sqlalchemy/engine/__init__.py", line 428, in engine_from_config
return create_engine(url, **options)
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/sqlalchemy/engine/__init__.py", line 387, in create_engine
return strategy.create(*args, **kwargs)
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/sqlalchemy/engine/strategies.py", line 50, in create
u = url.make_url(name_or_url)
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/sqlalchemy/engine/url.py", line 194, in make_url
return _parse_rfc1738_args(name_or_url)
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/sqlalchemy/engine/url.py", line 240, in _parse_rfc1738_args
return URL(name, **components)
File "/usr/lib/ckan/default/local/lib/python2.7/site-packages/sqlalchemy/engine/url.py", line 60, in __init__
self.port = int(port)
ValueError: invalid literal for int() with base 10: ''
UPDATE 3 (Matt Fullerton's suggestion)
I cloned https://github.com/parksandwildlife/ckan.git into a folder called ckan.
I cloned https://github.com/parksandwildlife/ckan/tree/3649-docker-upgrade into another folder called ckan-3649.
In the ckan folder:
docker build . -t ckan && docker run -d -p 80:5000 --link db:db --link redis:redis --link solr:solr ckan
It fails at Step 15/26:
RUN ckan-pip install --upgrade -r $CKAN_HOME/src/ckan/dev-requirements.txt
with this error:
Cleaning up...
Command /usr/lib/ckan/default/bin/python2 -c "import setuptools, tokenize;__file__='/tmp/pip-build-BLM6DJ/cryptography/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-Vynm83-record/install-record.txt --single-version-externally-managed --compile --install-headers /usr/lib/ckan/default/include/site/python2.7 failed with error code 1 in /tmp/pip-build-BLM6DJ/cryptography
Storing debug log for failure in /root/.pip/pip.log
The command '/bin/sh -c ckan-pip install --upgrade -r $CKAN_HOME/src/ckan/dev-requirements.txt' returned a non-zero code: 1
In ckan-3649/contrib/docker I ran docker-compose up:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b6d7a5a595ca redis:latest "docker-entrypoint..." 9 minutes ago Up 9 minutes 6379/tcp redis
4551ccea28f4 ckan/solr:latest "docker-entrypoint..." 9 minutes ago Up 9 minutes 8983/tcp solr
86cea84b6ab5 docker_db "docker-entrypoint..." 9 minutes ago Up 9 minutes 5432/tcp db
fca64f1bee5a clementmouchet/datapusher "python datapusher..." 9 minutes ago Up 9 minutes 0.0.0.0:8800->8800/tcp datapusher
But I think the ckan image is missing...
There is a major upgrade of the Docker functionality in CKAN underway. I would suggest cloning CKAN from here:
https://github.com/parksandwildlife/ckan
git clone https://github.com/parksandwildlife/ckan.git
Then check out the origin/3649-docker-upgrade branch
(https://github.com/parksandwildlife/ckan/tree/3649-docker-upgrade):
git checkout origin/3649-docker-upgrade
Then use docker-compose in the contrib/docker folder:
docker-compose up
This should assemble Solr, Postgres, etc. for you.
Comments on mileage with this at https://github.com/ckan/ckan/pull/3692 would also be appreciated.
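Put end to end, the suggested sequence looks like this (just a consolidation of the commands above, nothing new added):
git clone https://github.com/parksandwildlife/ckan.git
cd ckan
git checkout origin/3649-docker-upgrade
cd contrib/docker
docker-compose up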

Running Raku notebook in Docker

Running a Raku (previously known as Perl 6) kernel in a Jupyter notebook would be great for reproducibility and ease of use (personal view).
I wanted to run the Perl 6 notebook in a Docker container and access it in my web browser. For this I created a Docker image.
The code to create the Docker image was:
FROM sumankhanal/rakudo:2019.07.1
RUN apt-get update \
    && apt-get install -y ca-certificates python3-pip && pip3 install jupyter notebook \
    && zef install Jupyter::Kernel --force-test
ENV TINI_VERSION v0.6.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /usr/bin/tini
RUN chmod +x /usr/bin/tini
ENTRYPOINT ["/usr/bin/tini", "--"]
EXPOSE 8888
CMD ["jupyter", "notebook", "--port=8888", "--no-browser", "--ip=0.0.0.0"]
I am on Windows 10 64-bit, and the IP address of my Docker machine is 192.168.99.100.
When I ran the container with
docker run -it -p 8888:8888 sumankhanal/raku-notebook
in the Docker terminal, I got this error:
$ docker run -it -p 8888:8888 sumankhanal/raku-notebook
[I 14:26:43.598 NotebookApp] Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret
Traceback (most recent call last):
File "/usr/local/bin/jupyter-notebook", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python3.5/dist-packages/jupyter_core/application.py", line 267, in launch_instance
return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/traitlets/config/application.py", line 657, in launch_instance
app.initialize(argv)
File "<decorator-gen-7>", line 2, in initialize
File "/usr/local/lib/python3.5/dist-packages/traitlets/config/application.py", line 87, in catch_config_error
return method(app, *args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/notebook/notebookapp.py", line 1296, in initialize
self.init_webapp()
File "/usr/local/lib/python3.5/dist-packages/notebook/notebookapp.py", line 1120, in init_webapp
self.http_server.listen(port, self.ip)
File "/usr/local/lib/python3.5/dist-packages/tornado/tcpserver.py", line 142, in listen
sockets = bind_sockets(port, address=address)
File "/usr/local/lib/python3.5/dist-packages/tornado/netutil.py", line 197, in bind_sockets
sock.bind(sockaddr)
OSError: [Errno 99] Cannot assign requested address
Any help?
You need to add --allow-root to the CMD in your Dockerfile. You also need to register the kernel with Jupyter in your Dockerfile:
jupyter-kernel.p6 --generate-config
Once you do that you will be able to see the kernel. I also noticed that your image is very large; you should try to find a smaller base image for Jupyter than the one you have.
For more details about installing kernels, see:
https://jupyter.readthedocs.io/en/latest/install-kernel.html
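A sketch of how those two changes might fit into the Dockerfile from the question (the tini lines are omitted for brevity; whether --allow-root also clears the Errno 99 bind error is not something the answer claims):
FROM sumankhanal/rakudo:2019.07.1
RUN apt-get update \
    && apt-get install -y ca-certificates python3-pip && pip3 install jupyter notebook \
    && zef install Jupyter::Kernel --force-test
# Register the Raku kernel with Jupyter (per the answer above)
RUN jupyter-kernel.p6 --generate-config
EXPOSE 8888
CMD ["jupyter", "notebook", "--port=8888", "--no-browser", "--ip=0.0.0.0", "--allow-root"]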
