Error while building a Pyramid app in a Docker container

I'm trying to build a Pyramid Docker container with the code from a repository. I'm new to Docker, but I've tried this in my Dockerfile:
FROM alpine:3.7
RUN apk add --update \
python3 \
py-pip \
git
RUN pip3 install --upgrade pip
RUN git clone http://my_git_repo test && \
cd test && \
pip3 install -e . && \
initialize_untitled2_db development.ini && \
pserve development.ini
EXPOSE 6543
The container runs all the commands and everything works, but on the last command it fails to start the Pyramid app.
Then I get the following error message:
Traceback (most recent call last):
File "/usr/bin/pserve", line 11, in <module>
sys.exit(main())
File "/usr/lib/python3.6/site-packages/pyramid/scripts/pserve.py", line 32, in main
return command.run()
File "/usr/lib/python3.6/site-packages/pyramid/scripts/pserve.py", line 239, in run
server(app)
File "/usr/lib/python3.6/site-packages/paste/deploy/loadwsgi.py", line 189, in server_wrapper
**context.local_conf)
File "/usr/lib/python3.6/site-packages/paste/deploy/util.py", line 55, in fix_call
val = callable(*args, **kw)
File "/usr/lib/python3.6/site-packages/waitress/__init__.py", line 20, in serve_paste
serve(app, **kw)
File "/usr/lib/python3.6/site-packages/waitress/__init__.py", line 11, in serve
server = _server(app, **kw)
File "/usr/lib/python3.6/site-packages/waitress/server.py", line 85, in create_server
sockinfo=sockinfo)
File "/usr/lib/python3.6/site-packages/waitress/server.py", line 182, in __init__
self.bind_server_socket()
File "/usr/lib/python3.6/site-packages/waitress/server.py", line 294, in bind_server_socket
self.bind(sockaddr)
File "/usr/lib/python3.6/asyncore.py", line 329, in bind
return self.socket.bind(addr)
OSError: [Errno 99] Address not available
The Pyramid app works without problems outside a container. Like I said, I'm new to Docker and I can't find the mistake.
The application's config file binds to localhost, and with port mapping it shouldn't be a problem to run on localhost inside Docker too.
Does someone know what's causing this error?

It seemed to be a problem with the "localhost" domain name in the config. I changed it to the local IP address "127.0.0.1" and then it worked fine.
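For reference, this is roughly what that change looks like in the [server:main] section of a cookiecutter-style development.ini (a sketch; your file may differ):
[server:main]
use = egg:waitress#main
# listen = localhost:6543
listen = 127.0.0.1:6543
If the app should also be reachable from the host through Docker's port mapping, binding to 0.0.0.0:6543 is the usual choice.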

Related

ansible.errors.AnsibleError: Unable to create local directories(/.ansible/tmp): [Errno 13] Permission denied: b'/.ansible'

I'm running Ansible in a container, and running the following command fails:
ansible-playbook --version
Unhandled error:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/ansible/utils/path.py", line 85, in makedirs_safe
os.makedirs(b_rpath, mode)
File "/usr/lib/python3.8/os.py", line 213, in makedirs
makedirs(head, exist_ok=exist_ok)
File "/usr/lib/python3.8/os.py", line 223, in makedirs
mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: b'/.ansible'
and more errors, including:
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/ansible-playbook", line 62, in <module>
import ansible.constants as C
File "/usr/local/lib/python3.8/dist-packages/ansible/constants.py", line 174, in <module>
config = ConfigManager()
File "/usr/local/lib/python3.8/dist-packages/ansible/config/manager.py", line 291, in __init__
self.update_config_data()
File "/usr/local/lib/python3.8/dist-packages/ansible/config/manager.py", line 571, in update_config_data
raise AnsibleError("Invalid settings supplied for %s: %s\n" % (config, to_native(e)), orig_exc=e)
ansible.errors.AnsibleError: Invalid settings supplied for DEFAULT_LOCAL_TMP: Unable to create local directories(/.ansible/tmp): [Errno 13] Permission denied: b'/.ansible'
This is the Dockerfile I'm using:
FROM ubuntu
ENV ANSIBLE_VERSION 2.9.9
# Install Ansible.
RUN apt-get update && apt-get install -y curl unzip ca-certificates python3 python3-pip \
&& pip3 install ansible==${ANSIBLE_VERSION} \
&& apt-get clean all
# Define default command.
CMD ["/usr/bin/bash"]
This works locally, but it does not work inside a Docker container in EKS.
Any idea what's wrong?
I was having the same problem. I am running Jenkins in a Docker container. I tried three different GitHub Ansible images; none of that mattered. What worked was changing this ...
stage('Execute AD Hoc Ansible.') {
steps {
script {
sh """
ansible ${PATTERN} -i ${INVENTORY} -l "${LIMIT}" -m ${MODULE} -a ${DASH_A} ${EXTRA_PARAMS}
"""
}
}
}
... to this ...
stage('Execute AD Hoc Ansible.') {
steps {
script {
env.DEFAULT_LOCAL_TMP = env.WORKSPACE_TMP
env.HOME = env.WORKSPACE
sh """
ansible ${PATTERN} -i ${INVENTORY} -l "${LIMIT}" -m ${MODULE} -a ${DASH_A} ${EXTRA_PARAMS}
"""
}
}
}
Note that I had to set the environment variables with these lines:
env.DEFAULT_LOCAL_TMP = env.WORKSPACE_TMP
env.HOME = env.WORKSPACE
Following this thread, I solved it successfully:
https://stackoverflow.com/a/35180089/17758190
I edited the ansible.cfg of my Ansible installation:
remote_tmp = /tmp/.ansible/tmp
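For context, remote_tmp belongs in the [defaults] section of ansible.cfg; a minimal sketch of the edited file (the local_tmp line is an extra assumption, added because the error above concerns the local temp directory):
[defaults]
remote_tmp = /tmp/.ansible/tmp
local_tmp = /tmp/.ansible/tmp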

How to get NVIDIA driver using a deeplearning-platform-release VM image?

I am running into an issue where I need to have the NVIDIA driver installed.
I initially created a Compute Engine VM based on this:
export IMAGE_FAMILY="pytorch-latest-cu100"
export ZONE="us-west1-b"
export INSTANCE_NAME="my-instance"
gcloud compute instances create $INSTANCE_NAME \
--zone=$ZONE \
--image-family=$IMAGE_FAMILY \
--image-project=deeplearning-platform-release \
--maintenance-policy=TERMINATE \
--accelerator="type=nvidia-tesla-v100,count=1" \
--metadata="install-nvidia-driver=True"
My code that's deployed on this VM works fine. Now I need to create a REST API layer over it, so according to this, I need to containerize the application using Docker.
I tried building my docker image from:
gcr.io/deeplearning-platform-release/pytorch-latest-cu100
(from the above command), but it seems this image doesn't exist.
Then I tried building another image from:
gcr.io/deeplearning-platform-release/pytorch-gpu.1-1
But now when I run my code, I get the following error:
Traceback (most recent call last):
File "model.py", line 297, in run
data = main(filepath)
File "model.py", line 52, in main
model = model.cuda()
File "/root/miniconda3/lib/python3.7/site-
packages/torch/nn/modules/module.py", line 260, in cuda
return self._apply(lambda t: t.cuda(device))
File "/root/miniconda3/lib/python3.7/site-
packages/torch/nn/modules/module.py", line 187, in _apply
module._apply(fn)
File "/root/miniconda3/lib/python3.7/site-
packages/torch/nn/modules/module.py", line 187, in _apply
module._apply(fn)
File "/root/miniconda3/lib/python3.7/site-
packages/torch/nn/modules/module.py", line 187, in _apply
module._apply(fn)
File "/root/miniconda3/lib/python3.7/site-
packages/torch/nn/modules/module.py", line 193, in _apply
param.data = fn(param.data)
File "/root/miniconda3/lib/python3.7/site-
packages/torch/nn/modules/module.py", line 260, in <lambda>
return self._apply(lambda t: t.cuda(device))
File "/root/miniconda3/lib/python3.7/site-
packages/torch/cuda/__init__.py", line 161, in _lazy_init
_check_driver()
File "/root/miniconda3/lib/python3.7/site-
packages/torch/cuda/__init__.py", line 82, in _check_driver
http://www.nvidia.com/Download/index.aspx""")
AssertionError:
Found no NVIDIA driver on your system. Please check that you
have an NVIDIA GPU and installed a driver from
http://www.nvidia.com/Download/index.aspx
My Dockerfile:
FROM gcr.io/deeplearning-platform-release/pytorch-gpu.1-1
WORKDIR /app
COPY requirements.txt /app
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 8080
COPY . /app/
CMD [ "python","main.py" ]
My main.py:
from flask import Flask, request
import model

app = Flask(__name__)

@app.route('/getduration', methods=['POST'])
def get_duration():
    try:
        data = request.args.get('param')
    except:
        data = None
    try:
        duration = model.run(data)
        return duration, 200
    except Exception as e:
        error = f"There was an error: {e}"
        return error, 500

if __name__ == '__main__':
    app.run(host='127.0.0.1', port=8080, debug=True)
How can I update my Dockerfile such that I can use the NVIDIA driver?
Are you using NVIDIA Docker? If not, that could be your problem. Use nvidia-docker exactly as you would docker, and it will make the NVIDIA drivers available inside your container.
https://github.com/NVIDIA/nvidia-docker
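For reference, with Docker 19.03+ and the NVIDIA Container Toolkit installed on the host, the same effect can be had with the --gpus flag; a minimal sketch, assuming the image above was built and tagged my-pytorch-app (a hypothetical tag):
# newer Docker with the NVIDIA Container Toolkit
docker run --gpus all -p 8080:8080 my-pytorch-app
# older nvidia-docker wrapper
nvidia-docker run -p 8080:8080 my-pytorch-app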

Submit a spark job from Airflow to external spark container

I have a Spark and Airflow cluster built with Docker Swarm. The Airflow container does not contain spark-submit, as I had expected it would.
I am using the following images, which exist on GitHub:
Spark: big-data-europe/docker-hadoop-spark-workbench
Airflow: puckel/docker-airflow (CeleryExecutor)
I prepared a .py file and added it under the dags folder.
from airflow import DAG
from airflow.contrib.operators.spark_submit_operator import SparkSubmitOperator
from datetime import datetime, timedelta
args = {'owner': 'airflow', 'start_date': datetime(2018, 9, 24) }
dag = DAG('spark_example_new', default_args=args, schedule_interval="@once")
operator = SparkSubmitOperator(task_id='spark_submit_job', conn_id='spark_default', java_class='Main', application='/SimpleSpark.jar', name='airflow-spark-example',conf={'master':'spark://master:7077'},
dag=dag)
I also configured the connection as follows in the web UI:
Master is the hostname of the Spark master container.
But it does not find spark-submit, and it produces the following error:
[2018-09-24 08:48:14,063] {{logging_mixin.py:95}} INFO - [2018-09-24 08:48:14,062] {{spark_submit_hook.py:283}} INFO - Spark-Submit cmd: ['spark-submit', '--master', 'spark://master:7077', '--conf', 'master=spark://master:7077', '--name', 'airflow-spark-example', '--class', 'Main', '--queue', 'root.default', '/SimpleSpark.jar']
[2018-09-24 08:48:14,067] {{models.py:1736}} ERROR - [Errno 2] No such file or directory: 'spark-submit': 'spark-submit'
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/airflow/models.py", line 1633, in _run_raw_task
result = task_copy.execute(context=context)
File "/usr/local/lib/python3.6/site-packages/airflow/contrib/operators/spark_submit_operator.py", line 168, in execute
self._hook.submit(self._application)
File "/usr/local/lib/python3.6/site-packages/airflow/contrib/hooks/spark_submit_hook.py", line 330, in submit
**kwargs)
File "/usr/local/lib/python3.6/subprocess.py", line 709, in __init__
restore_signals, start_new_session)
File "/usr/local/lib/python3.6/subprocess.py", line 1344, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'spark-submit': 'spark-submit'
As far as I know, puckel/docker-airflow uses the Python slim image (https://hub.docker.com/_/python/). This image does not contain the common packages and only contains the minimal packages needed to run Python. Hence, you will need to extend the image and install spark-submit in your container.
Edit: Airflow does need the Spark binaries in the container to run SparkSubmitOperator, as documented here.
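One way to do that, sketched here under the assumption that the puckel image is Debian-based and normally runs as an airflow user (untested):
FROM puckel/docker-airflow:latest
USER root
# spark-submit needs a Java runtime
RUN apt-get update && apt-get install -y default-jre && apt-get clean
# the pip-installed pyspark package bundles the Spark binaries, including spark-submit
RUN pip install pyspark
# switch back to the unprivileged user the image normally runs as
USER airflow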
The other approach you can use is the SSHOperator: run the spark-submit command on an external VM by SSHing into a remote machine. But here as well, SSH support has to be available, and it isn't available in the Puckel Airflow image.
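If you do go the SSH route (after adding the SSH dependencies), the task might look roughly like this; the ssh_conn_id, jar path, and master URL are assumptions:
from airflow.contrib.operators.ssh_operator import SSHOperator
ssh_submit = SSHOperator(
    task_id='spark_submit_via_ssh',
    ssh_conn_id='spark_master_ssh',  # assumed SSH connection configured in the Airflow UI
    command='spark-submit --master spark://master:7077 --class Main /SimpleSpark.jar',
    dag=dag)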
This is a late answer.
You should install apache-airflow-providers-apache-spark.
So create a file called 'requirements.txt' and add apache-airflow-providers-apache-spark to it.
Create a Dockerfile like this
FROM apache/airflow:2.2.3
# Install OpenJDK-11
RUN apt update && \
apt-get install -y openjdk-11-jdk && \
apt-get install -y ant && \
apt-get clean;
# Set JAVA_HOME
ENV JAVA_HOME /usr/lib/jvm/java-11-openjdk-amd64/
RUN export JAVA_HOME
USER airflow
COPY requirements.txt .
RUN pip install -r requirements.txt
In the docker-compose.yml, comment out the line:
# image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.2.3}
and uncomment the build . line (see the excerpt below).
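In the stock Airflow 2.2.3 docker-compose.yaml these two lines live in the x-airflow-common block, so the edit looks roughly like this (excerpt only):
x-airflow-common:
  &airflow-common
  # image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.2.3}
  build: .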
Finally run
docker-compose build
docker-compose up

ConnectionError when running Redis server inside Docker

I have the following Python script, example.py:
import redis
r = redis.Redis()
r.set('a','b')
and the following Dockerfile:
FROM ubuntu:18.04
# Install system-wide dependencies
RUN apt-get -yqq update
RUN apt-get -yqq install python3-dev python3-pip
RUN apt-get -yqq install redis-tools redis-server
RUN pip3 install redis
# Copy application code
ADD . /usr/local/example
WORKDIR /usr/local/example
# Start application
CMD /etc/init.d/redis-server restart \
&& python3 example.py
After I build the image (docker build -t redis-example .) and run it (docker run -P -it -d redis-example), the following stack trace is printed:
Stopping redis-server: redis-server.
Starting redis-server: redis-server.
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/redis/connection.py", line 484, in connect
sock = self._connect()
File "/usr/local/lib/python3.6/dist-packages/redis/connection.py", line 541, in _connect
raise err
File "/usr/local/lib/python3.6/dist-packages/redis/connection.py", line 529, in _connect
sock.connect(socket_address)
OSError: [Errno 99] Cannot assign requested address
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/redis/client.py", line 667, in execute_command
connection.send_command(*args)
File "/usr/local/lib/python3.6/dist-packages/redis/connection.py", line 610, in send_command
self.send_packed_command(self.pack_command(*args))
File "/usr/local/lib/python3.6/dist-packages/redis/connection.py", line 585, in send_packed_command
self.connect()
File "/usr/local/lib/python3.6/dist-packages/redis/connection.py", line 489, in connect
raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error 99 connecting to localhost:6379. Cannot assign requested address.
I'm not sure why this very simple example is failing in Docker. I'd like to run an instance of a Redis server local to the Docker container.
It does not need to be reachable outside of the container and should be unique to the container.
How can I resolve this exception?

Running Raku notebook in Docker

Running a Raku (previously known as Perl 6) kernel in a Jupyter notebook would be great for reproducibility and ease of use (personal view).
I wanted to run the Perl 6 notebook in a Docker container and access it in my web browser. For this I created this Docker image.
The code to create the Docker image was:
FROM sumankhanal/rakudo:2019.07.1
RUN apt-get update \
&& apt-get install -y ca-certificates python3-pip && pip3 install jupyter notebook \
&& zef install Jupyter::Kernel --force-test
ENV TINI_VERSION v0.6.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /usr/bin/tini
RUN chmod +x /usr/bin/tini
ENTRYPOINT ["/usr/bin/tini", "--"]
EXPOSE 8888
CMD ["jupyter", "notebook", "--port=8888", "--no-browser", "--ip=0.0.0.0"]
I am on Windows 10 64-bit and the IP address of my Docker machine is 192.168.99.100.
When I tried to run the container with this command in the Docker terminal:
docker run -it -p 8888:8888 sumankhanal/raku-notebook
I get this error:
$ docker run -it -p 8888:8888 sumankhanal/raku-notebook
[I 14:26:43.598 NotebookApp] Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret
Traceback (most recent call last):
File "/usr/local/bin/jupyter-notebook", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python3.5/dist-packages/jupyter_core/application.py", line 267, in launch_instance
return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/traitlets/config/application.py", line 657, in launch_instance
app.initialize(argv)
File "<decorator-gen-7>", line 2, in initialize
File "/usr/local/lib/python3.5/dist-packages/traitlets/config/application.py", line 87, in catch_config_error
return method(app, *args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/notebook/notebookapp.py", line 1296, in initialize
self.init_webapp()
File "/usr/local/lib/python3.5/dist-packages/notebook/notebookapp.py", line 1120, in init_webapp
self.http_server.listen(port, self.ip)
File "/usr/local/lib/python3.5/dist-packages/tornado/tcpserver.py", line 142, in listen
sockets = bind_sockets(port, address=address)
File "/usr/local/lib/python3.5/dist-packages/tornado/netutil.py", line 197, in bind_sockets
sock.bind(sockaddr)
OSError: [Errno 99] Cannot assign requested address
Any help?
You need to add --allow-root to the CMD of your Dockerfile. Also, you need to link the kernel with Jupyter in your Dockerfile:
jupyter-kernel.p6 --generate-config
Once you do that you will be able to see the kernel in the notebook. I also noticed that your image size is very large; you should try to find a better base image for Jupyter than the one you have.
For more details about installing kernels, refer to the link below:
https://jupyter.readthedocs.io/en/latest/install-kernel.html
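A sketch of what those two changes could look like in the Dockerfile above (exact placement is an assumption):
# register the Raku kernel with Jupyter (Jupyter::Kernel is already installed above)
RUN jupyter-kernel.p6 --generate-config
CMD ["jupyter", "notebook", "--port=8888", "--no-browser", "--ip=0.0.0.0", "--allow-root"]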
