I have the following Python script, example.py:
import redis
r = redis.Redis()
r.set('a','b')
and the following Dockerfile:
FROM ubuntu:18.04
# Install system-wide dependencies
RUN apt-get -yqq update
RUN apt-get -yqq install python3-dev python3-pip
RUN apt-get -yqq install redis-tools redis-server
RUN pip3 install redis
# Copy application code
ADD . /usr/local/example
WORKDIR /usr/local/example
# Start application
CMD /etc/init.d/redis-server restart \
&& python3 example.py
After I build the container (docker build -t redis-example .) and run it (docker run -P -it -d redis-example), the following stack trace is printed:
Stopping redis-server: redis-server.
Starting redis-server: redis-server.
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/redis/connection.py", line 484, in connect
sock = self._connect()
File "/usr/local/lib/python3.6/dist-packages/redis/connection.py", line 541, in _connect
raise err
File "/usr/local/lib/python3.6/dist-packages/redis/connection.py", line 529, in _connect
sock.connect(socket_address)
OSError: [Errno 99] Cannot assign requested address
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/redis/client.py", line 667, in execute_command
connection.send_command(*args)
File "/usr/local/lib/python3.6/dist-packages/redis/connection.py", line 610, in send_command
self.send_packed_command(self.pack_command(*args))
File "/usr/local/lib/python3.6/dist-packages/redis/connection.py", line 585, in send_packed_command
self.connect()
File "/usr/local/lib/python3.6/dist-packages/redis/connection.py", line 489, in connect
raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error 99 connecting to localhost:6379. Cannot assign requested address.
I'm not sure why this very simple example is failing in Docker. I'd like to run an instance of a Redis server local to the Docker container.
It does not need to be reachable outside of the container and should be unique to the container.
How can I resolve this exception?
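One frequent cause of "Errno 99 Cannot assign requested address" inside containers (an assumption about this case, not confirmed by the trace) is that "localhost" resolves to the IPv6 address ::1 while redis-server only listens on IPv4 127.0.0.1. A quick stdlib-only check, as a sketch:

```python
import socket

# Print what "localhost" resolves to in this environment. If only ::1
# (IPv6) comes back but the server listens on 127.0.0.1, TCP connects
# to "localhost" can fail with Errno 99.
for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
        "localhost", 6379, proto=socket.IPPROTO_TCP):
    print(family.name, sockaddr)
```

If that turns out to be the issue, pointing the client at the IPv4 address explicitly (`redis.Redis(host='127.0.0.1')`) is a common workaround.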
Related
I'm running Ansible in a container and getting:
ansible-playbook --version
Unhandled error:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/ansible/utils/path.py", line 85, in makedirs_safe
os.makedirs(b_rpath, mode)
File "/usr/lib/python3.8/os.py", line 213, in makedirs
makedirs(head, exist_ok=exist_ok)
File "/usr/lib/python3.8/os.py", line 223, in makedirs
mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: b'/.ansible'
and more errors, including:
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/ansible-playbook", line 62, in <module>
import ansible.constants as C
File "/usr/local/lib/python3.8/dist-packages/ansible/constants.py", line 174, in <module>
config = ConfigManager()
File "/usr/local/lib/python3.8/dist-packages/ansible/config/manager.py", line 291, in __init__
self.update_config_data()
File "/usr/local/lib/python3.8/dist-packages/ansible/config/manager.py", line 571, in update_config_data
raise AnsibleError("Invalid settings supplied for %s: %s\n" % (config, to_native(e)), orig_exc=e)
ansible.errors.AnsibleError: Invalid settings supplied for DEFAULT_LOCAL_TMP: Unable to create local directories(/.ansible/tmp): [Errno 13] Permission denied: b'/.ansible'
This is the Dockerfile I'm using:
FROM ubuntu
ENV ANSIBLE_VERSION 2.9.9
# Install Ansible.
RUN apt-get update && apt-get install -y curl unzip ca-certificates python3 python3-pip \
&& pip3 install ansible==${ANSIBLE_VERSION} \
&& apt-get clean all
# Define default command.
CMD ["/usr/bin/bash"]
This works locally, but it does not inside a Docker container in EKS.
Any idea what's wrong?
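For context (an assumption about the cause, not stated in the post): Ansible's default local tmp directory is ~/.ansible/tmp, and when the container user's HOME is "/" (common for the arbitrary UIDs Kubernetes/EKS can assign), that expands to /.ansible/tmp, directly under the filesystem root — matching the Errno 13 on b'/.ansible' above. A minimal sketch of the expansion:

```python
import os
import os.path

# Simulate a container whose user has HOME set to "/": Ansible's default
# local tmp path "~/.ansible/tmp" then expands to a directory directly
# under the filesystem root, which an unprivileged user cannot create.
os.environ["HOME"] = "/"
print(os.path.expanduser("~/.ansible/tmp"))  # -> /.ansible/tmp
```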
I was having the same problem. I am running Jenkins in a Docker container. I tried three different GitHub Ansible images; none of that mattered. What worked was changing this ...
stage('Execute AD Hoc Ansible.') {
steps {
script {
sh """
ansible ${PATTERN} -i ${INVENTORY} -l "${LIMIT}" -m ${MODULE} -a ${DASH_A} ${EXTRA_PARAMS}
"""
}
}
}
... to this ...
stage('Execute AD Hoc Ansible.') {
steps {
script {
env.DEFAULT_LOCAL_TMP = env.WORKSPACE_TMP
env.HOME = env.WORKSPACE
sh """
ansible ${PATTERN} -i ${INVENTORY} -l "${LIMIT}" -m ${MODULE} -a ${DASH_A} ${EXTRA_PARAMS}
"""
}
}
}
Note that I had to set the env vars with these lines:
env.DEFAULT_LOCAL_TMP = env.WORKSPACE_TMP
env.HOME = env.WORKSPACE
Following this thread, I solved it successfully:
https://stackoverflow.com/a/35180089/17758190
Edit the ansible.cfg of your Ansible installation and set:
remote_tmp = /tmp/.ansible/tmp
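For reference, a minimal sketch of what that fragment looks like in context (the [defaults] section name is the standard ansible.cfg layout; the local_tmp line is my assumption, added because the traceback above concerns the local tmp directory):

```ini
[defaults]
# Keep Ansible's scratch directories under /tmp, which is writable
# for any user inside the container.
remote_tmp = /tmp/.ansible/tmp
local_tmp = /tmp/.ansible/tmp
```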
I am trying to integrate my Anaconda package build system with the Codeship CI/CD service, so I need my build process to happen inside a Docker container.
Currently my Dockerfile looks like so:
FROM continuumio/miniconda3
COPY . .
RUN conda create --yes --name build-env python=3.8 \
&& conda install -n build-env conda-build -y \
&& conda run -n build-env conda-build --channel haasad .
RUN conda create --yes --name testing-env python=3.8 \
&& conda install -n testing-env --use-local sten -c haasad \
&& conda install -n testing-env -c anaconda pytest
When the build runs, the following error occurs:
/opt/conda/envs/build-env/conda-bld/chardet_1591782226225/work/conda_build.sh: line 4: /tmp/build/80754af9/chardet_1573033772973/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placeh/bin/python: No such file or directory
Traceback (most recent call last):
File "/opt/conda/envs/build-env/bin/conda-build", line 11, in <module>
sys.exit(main())
File "/opt/conda/envs/build-env/lib/python3.8/site-packages/conda_build/cli/main_build.py", line 469, in main
execute(sys.argv[1:])
File "/opt/conda/envs/build-env/lib/python3.8/site-packages/conda_build/cli/main_build.py", line 458, in execute
outputs = api.build(args.recipe, post=args.post, build_only=args.build_only,
File "/opt/conda/envs/build-env/lib/python3.8/site-packages/conda_build/api.py", line 208, in build
return build_tree(absolute_recipes, config, stats, build_only=build_only, post=post,
File "/opt/conda/envs/build-env/lib/python3.8/site-packages/conda_build/build.py", line 2339, in build_tree
packages_from_this = build(metadata, stats,
File "/opt/conda/envs/build-env/lib/python3.8/site-packages/conda_build/build.py", line 1491, in build
utils.check_call_env(cmd, env=env, rewrite_stdout_env=rewrite_env,
File "/opt/conda/envs/build-env/lib/python3.8/site-packages/conda_build/utils.py", line 398, in check_call_env
return _func_defaulting_env_to_os_environ('call', *popenargs, **kwargs)
File "/opt/conda/envs/build-env/lib/python3.8/site-packages/conda_build/utils.py", line 378, in _func_defaulting_env_to_os_environ
raise subprocess.CalledProcessError(proc.returncode, _args)
subprocess.CalledProcessError: Command '['/bin/bash', '-o', 'errexit', '/opt/conda/envs/build-env/conda-bld/chardet_1591782226225/work/conda_build.sh']' returned non-zero exit status 127.
The repo
How to fix this and what am I doing wrong?
I have a Spark and Airflow cluster built with Docker swarm. The Airflow container does not contain spark-submit as I expected.
I am using the following images, which exist on GitHub:
Spark: big-data-europe/docker-hadoop-spark-workbench
Airflow: puckel/docker-airflow (CeleryExecutor)
I prepared a .py file and added it under the dags folder.
from airflow import DAG
from airflow.contrib.operators.spark_submit_operator import SparkSubmitOperator
from datetime import datetime, timedelta
args = {'owner': 'airflow', 'start_date': datetime(2018, 9, 24) }
dag = DAG('spark_example_new', default_args=args, schedule_interval="@once")
operator = SparkSubmitOperator(task_id='spark_submit_job', conn_id='spark_default', java_class='Main', application='/SimpleSpark.jar', name='airflow-spark-example',conf={'master':'spark://master:7077'},
dag=dag)
I also configured the connection as follows in the web UI:
Master is the hostname of spark master container.
But it does not find spark-submit, and it produces the following error:
[2018-09-24 08:48:14,063] {{logging_mixin.py:95}} INFO - [2018-09-24 08:48:14,062] {{spark_submit_hook.py:283}} INFO - Spark-Submit cmd: ['spark-submit', '--master', 'spark://master:7077', '--conf', 'master=spark://master:7077', '--name', 'airflow-spark-example', '--class', 'Main', '--queue', 'root.default', '/SimpleSpark.jar']
[2018-09-24 08:48:14,067] {{models.py:1736}} ERROR - [Errno 2] No such file or directory: 'spark-submit': 'spark-submit'
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/airflow/models.py", line 1633, in _run_raw_task
result = task_copy.execute(context=context)
File "/usr/local/lib/python3.6/site-packages/airflow/contrib/operators/spark_submit_operator.py", line 168, in execute
self._hook.submit(self._application)
File "/usr/local/lib/python3.6/site-packages/airflow/contrib/hooks/spark_submit_hook.py", line 330, in submit
**kwargs)
File "/usr/local/lib/python3.6/subprocess.py", line 709, in __init__
restore_signals, start_new_session)
File "/usr/local/lib/python3.6/subprocess.py", line 1344, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'spark-submit': 'spark-submit'
As far as I know, puckel/docker-airflow uses the Python slim image (https://hub.docker.com/_/python/). That image does not contain common packages; it contains only the minimal packages needed to run Python. Hence, you will need to extend the image and install spark-submit in your container.
Edit: Airflow does need spark binaries in the container to run SparkSubmitOperator as documented here.
The other approach is to use the SSHOperator to run the spark-submit command on an external VM by SSHing into a remote machine. But there, too, an SSH client must be available, and it isn't included in the Puckel Airflow image.
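A sketch of the extend-the-image approach, assuming the puckel/docker-airflow base; the Spark version and download URL are illustrative, not from the original answer — match them to your cluster:

```dockerfile
FROM puckel/docker-airflow:1.10.9
USER root
# Install a JRE plus a Spark distribution so spark-submit is on PATH.
# Version/URL below are illustrative -- pick the release your cluster runs.
RUN apt-get update && apt-get install -y default-jre curl \
    && curl -fsSL https://archive.apache.org/dist/spark/spark-2.4.8/spark-2.4.8-bin-hadoop2.7.tgz \
       | tar -xz -C /opt \
    && ln -s /opt/spark-2.4.8-bin-hadoop2.7 /opt/spark \
    && apt-get clean
ENV SPARK_HOME=/opt/spark
ENV PATH="${SPARK_HOME}/bin:${PATH}"
USER airflow
```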
This is a late answer.
You should install apache-airflow-providers-apache-spark. Create a file called requirements.txt and add apache-airflow-providers-apache-spark to it.
Then create a Dockerfile like this:
FROM apache/airflow:2.2.3
# Install OpenJDK-11
RUN apt-get update && \
    apt-get install -y openjdk-11-jdk ant && \
    apt-get clean
# Set JAVA_HOME
ENV JAVA_HOME /usr/lib/jvm/java-11-openjdk-amd64/
USER airflow
COPY requirements.txt .
RUN pip install -r requirements.txt
In the docker-compose.yml, comment out the line:
# image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.2.3}
and uncomment the `build .` line.
Finally, run:
docker-compose build
docker-compose up
I'm trying to build a Pyramid Docker container with the code from a repository. I'm new to Docker, but I've tried this in my Dockerfile:
FROM alpine:3.7
RUN apk add --update\
python3 \
py-pip \
git
RUN pip3 install --upgrade pip
RUN git clone http://my_git_repo test && \
cd test && \
pip3 install -e . && \
initialize_untitled2_db development.ini && \
pserve development.ini
EXPOSE 6543
The container runs all of the commands and everything works, but the last command fails to start the Pyramid app.
I then get the following error message:
Traceback (most recent call last):
File "/usr/bin/pserve", line 11, in <module>
sys.exit(main())
File "/usr/lib/python3.6/site-packages/pyramid/scripts/pserve.py", line 32, in main
return command.run()
File "/usr/lib/python3.6/site-packages/pyramid/scripts/pserve.py", line 239, in run
server(app)
File "/usr/lib/python3.6/site-packages/paste/deploy/loadwsgi.py", line 189, in server_wrapper
**context.local_conf)
File "/usr/lib/python3.6/site-packages/paste/deploy/util.py", line 55, in fix_call
val = callable(*args, **kw)
File "/usr/lib/python3.6/site-packages/waitress/__init__.py", line 20, in serve_paste
serve(app, **kw)
File "/usr/lib/python3.6/site-packages/waitress/__init__.py", line 11, in serve
server = _server(app, **kw)
File "/usr/lib/python3.6/site-packages/waitress/server.py", line 85, in create_server
sockinfo=sockinfo)
File "/usr/lib/python3.6/site-packages/waitress/server.py", line 182, in __init__
self.bind_server_socket()
File "/usr/lib/python3.6/site-packages/waitress/server.py", line 294, in bind_server_socket
self.bind(sockaddr)
File "/usr/lib/python3.6/asyncore.py", line 329, in bind
return self.socket.bind(addr)
OSError: [Errno 99] Address not available
The Pyramid app works without problems outside a container. Like I said, I'm new to Docker and I can't find the mistake.
The config file binds the application to localhost, and with port mapping it shouldn't be a problem to run on localhost inside Docker too.
Does anyone know what's causing this error?
It seemed to be a problem with the "localhost" domain name in the config. I changed it to the local IP address "127.0.0.1" and then it worked fine.
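For anyone hitting the reverse problem (the container runs but the published port is unreachable from the host): the binding lives in the [server:main] section of development.ini, and binding to 0.0.0.0 makes waitress accept connections arriving via Docker's port mapping. A sketch, assuming the stock Pyramid scaffold's waitress entry:

```ini
[server:main]
use = egg:waitress#main
# 127.0.0.1 is reachable only from inside the container;
# 0.0.0.0 listens on all interfaces, including the mapped port.
listen = 0.0.0.0:6543
```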
Running a Raku (previously aka Perl 6) kernel in a Jupyter notebook would be great for reproducibility and ease of use (personal view).
I wanted to run the Perl 6 notebook in a Docker container and access it in my web browser. For this I created this Docker image.
The code to create the docker image was:
FROM sumankhanal/rakudo:2019.07.1
RUN apt-get update \
&& apt-get install -y ca-certificates python3-pip && pip3 install jupyter notebook \
&& zef install Jupyter::Kernel --force-test
ENV TINI_VERSION v0.6.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /usr/bin/tini
RUN chmod +x /usr/bin/tini
ENTRYPOINT ["/usr/bin/tini", "--"]
EXPOSE 8888
CMD ["jupyter", "notebook", "--port=8888", "--no-browser", "--ip=0.0.0.0"]
I am on Windows 10 64-bit, and the IP address of my Docker machine is 192.168.99.100.
When I tried to run the container in the Docker terminal with:
docker run -it -p 8888:8888 sumankhanal/raku-notebook
I get this error:
$ docker run -it -p 8888:8888 sumankhanal/raku-notebook
[I 14:26:43.598 NotebookApp] Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret
Traceback (most recent call last):
File "/usr/local/bin/jupyter-notebook", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python3.5/dist-packages/jupyter_core/application.py", line 267, in launch_instance
return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/traitlets/config/application.py", line 657, in launch_instance
app.initialize(argv)
File "<decorator-gen-7>", line 2, in initialize
File "/usr/local/lib/python3.5/dist-packages/traitlets/config/application.py", line 87, in catch_config_error
return method(app, *args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/notebook/notebookapp.py", line 1296, in initialize
self.init_webapp()
File "/usr/local/lib/python3.5/dist-packages/notebook/notebookapp.py", line 1120, in init_webapp
self.http_server.listen(port, self.ip)
File "/usr/local/lib/python3.5/dist-packages/tornado/tcpserver.py", line 142, in listen
sockets = bind_sockets(port, address=address)
File "/usr/local/lib/python3.5/dist-packages/tornado/netutil.py", line 197, in bind_sockets
sock.bind(sockaddr)
OSError: [Errno 99] Cannot assign requested address
Any help?
You need to add --allow-root to the CMD of your Dockerfile. You also need to link the kernel with Jupyter in your Dockerfile:
jupyter-kernel.p6 --generate-config
Once you do that, you will be able to use the kernel. I also noticed that your image is very large; you should try to find a smaller base image for Jupyter than the one you have.
For more details about installing kernels, refer to the link below:
https://jupyter.readthedocs.io/en/latest/install-kernel.html
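Putting the answer's two fixes into the Dockerfile above would look roughly like this (a sketch, not a tested image):

```dockerfile
# Register the Raku kernel with Jupyter, then allow the notebook
# server to run as root inside the container.
RUN jupyter-kernel.p6 --generate-config
CMD ["jupyter", "notebook", "--port=8888", "--no-browser", "--ip=0.0.0.0", "--allow-root"]
```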