Can't connect Celery server to RabbitMQ on localhost - docker

I am using Celery and RabbitMQ as a message queue, where each is encapsulated in its own Docker image. When they are connected using the --link parameter in Docker, everything works fine, and I've had this setup working for some time. Now I want to separate them so that they run on different hosts, which means I can no longer use the --link parameter. I am getting a gaierror: [Errno -2] Name or service not known when I try to connect over AMQP and don't understand why.
The server is simply using the rabbitmq container on DockerHub:
docker run --rm --name=qrabbit -p 5672:5672 rabbitmq
I can telnet to this successfully:
$ telnet 192.168.99.100 5672
Trying 192.168.99.100...
Connected to 192.168.99.100.
Escape character is '^]'.
abc
^D
AMQP Connection closed by foreign host.
$
... so I know the server is running.
My client looks like this:
import os
from logging import getLogger, StreamHandler, DEBUG
from serverlib import QueueServer, CeleryMonitor
from celery import Celery
from argparse import ArgumentParser

log = getLogger('server')
log.addHandler(StreamHandler())
log.setLevel(DEBUG)

broker_service_host = os.environ.get('MESSAGE_QUEUE_SERVICE_SERVICE_HOST')
broker = 'amqp://{0}'.format(broker_service_host)
host = ''
port = 8000
retry = 5

if __name__ == '__main__':
    log.info('connecting to {0}, {1}:{2}, retry={3}'.format(broker, host, port, retry))
    app = Celery(broker=broker)
    monitor = CeleryMonitor(app, retry=retry)
    server = QueueServer((host, port), app)
    monitor.start()
    try:
        log.info('listening on {0}:{1}'.format(host, port))
        server.serve_forever()
    except KeyboardInterrupt:
        log.info('shutdown requested')
    except BaseException as e:
        log.error(e)
    finally:
        monitor.shutdown()
I'm somewhat certain the external modules (QueueServer and CeleryMonitor) are not part of the problem, as it runs properly when I do the following:
$ docker run --rm --name=qmaster -e "MESSAGE_QUEUE_SERVICE_SERVICE_HOST=localhost" --link qrabbit:rabbit -p 80:8000 render-task-master
connecting to amqp://localhost, :8000, retry=5
listening on :8000
^Cshutdown requested
$
... but not if I do the following (without the --link parameter):
$ docker run --rm --name=qmaster -e "MESSAGE_QUEUE_SERVICE_SERVICE_HOST=localhost" -p 80:8000 render-task-master
connecting to amqp://localhost, :8000, retry=5
listening on :8000
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/home/celery/serverlib/celerymonitor.py", line 68, in run
'*': self.__state.event
File "/usr/local/lib/python2.7/site-packages/celery/events/__init__.py", line 287, in __init__
self.channel = maybe_channel(channel)
File "/usr/local/lib/python2.7/site-packages/kombu/connection.py", line 1054, in maybe_channel
return channel.default_channel
File "/usr/local/lib/python2.7/site-packages/kombu/connection.py", line 756, in default_channel
self.connection
File "/usr/local/lib/python2.7/site-packages/kombu/connection.py", line 741, in connection
self._connection = self._establish_connection()
File "/usr/local/lib/python2.7/site-packages/kombu/connection.py", line 696, in _establish_connection
conn = self.transport.establish_connection()
File "/usr/local/lib/python2.7/site-packages/kombu/transport/pyamqp.py", line 116, in establish_connection
conn = self.Connection(**opts)
File "/usr/local/lib/python2.7/site-packages/amqp/connection.py", line 165, in __init__
self.transport = self.Transport(host, connect_timeout, ssl)
File "/usr/local/lib/python2.7/site-packages/amqp/connection.py", line 186, in Transport
return create_transport(host, connect_timeout, ssl)
File "/usr/local/lib/python2.7/site-packages/amqp/transport.py", line 299, in create_transport
return TCPTransport(host, connect_timeout)
File "/usr/local/lib/python2.7/site-packages/amqp/transport.py", line 75, in __init__
socket.SOCK_STREAM, SOL_TCP):
gaierror: [Errno -2] Name or service not known
^Cshutdown requested
$
What is the difference between using and not using the --link parameter that might cause this error?
UPDATE:
I've narrowed it down to an error in the monitor class I created:
recv = self.app.events.Receiver(connection, handlers={
    'task-received': self.registerTask,
    'task-failed': self.retryTask,
    'task-succeeded': self.deregisterTask,
    # should process all events to have state up to date
    '*': self.__state.event
})
When this is called, it sits for a few seconds (timeout?) and then throws an exception. Any idea why this wouldn't like the amqp URL specified as amqp://localhost but everything works correctly when I use the --link parameter?
Here's the whole method that call is in, for additional context:
def run(self):
    log.info('run')
    self.__state = self.app.events.State()
    with self.app.connection() as connection:
        log.info('got a connection')
        recv = self.app.events.Receiver(connection, handlers={
            'task-received': self.registerTask,
            'task-failed': self.retryTask,
            'task-succeeded': self.deregisterTask,
            # should process all events to have state up to date
            '*': self.__state.event
        })
        log.info('received receiver')
        # Capture until shutdown requested
        while not self.__shutdown:
            log.info('main run loop')
            try:
                recv.capture(limit=None, timeout=1, wakeup=True)
            except timeout:
                # timeout exception is fired when nothing occurs
                # during timeout. Just ignore it.
                pass

I found the issue: I had set CELERY_BROKER_URL in the Docker environment in which the container was running and this was causing the backend to attempt to connect to a hostname that did not exist. Once I un-set the variable, everything hooked up properly in my environment.
$ docker inspect server
<... removed ...>
"Env": [
    "MESSAGE_QUEUE_SERVICE_SERVICE_HOST=192.168.99.100",
    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
    "LANG=C.UTF-8",
    "PYTHON_VERSION=2.7.10",
    "PYTHON_PIP_VERSION=7.1.2",
    "CELERY_VERSION=3.1.18",
    "CELERY_BROKER_URL=amqp://guest@rabbit"
],
<... removed ...>
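For reference, the same fix can be expressed in the client code itself; this is a hypothetical sketch (variable names mirror the script above), not the exact change I made:

import os
from celery import Celery

# Drop a stale CELERY_BROKER_URL inherited from the container environment so
# that the broker passed below is the one that actually gets used.
os.environ.pop('CELERY_BROKER_URL', None)

broker_service_host = os.environ.get('MESSAGE_QUEUE_SERVICE_SERVICE_HOST', 'localhost')
app = Celery(broker='amqp://{0}'.format(broker_service_host))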

Related

Expose port using DockerOperator

I am using DockerOperator to run a container, but I do not see any option for publishing the required port. I need to publish a webserver port when the task is triggered. Any help or guidance would be appreciated. Thank you!
First, don't forget docker_operator is deprecated, replaced (now) with providers.docker.operators.docker.
Second, I don't know of a command to expose a port in a live (running) Docker container.
As described in this article from Sidhartha Mani
Specifically, I needed access to the filled mysql database.
I could think of a few ways to do this:
Stop the container and start a new one with the added port exposure. docker run -p 3306:3306 -p 8080:8080 -d java/server.
The second option is to start another container that links to this, and knows how to port forward.
Setup iptables rules to forward a host port into the container.
So:
Following existing rules, I created my own rule to forward to the container
iptables -t nat -D DOCKER ! -i docker0 -p tcp --dport 3306 -j DNAT \
    --to-destination 172.17.0.2:3306
This just says that whenever a packet is destined to port 3306 on the host, forward it to the container with ip 172.17.0.2, and its port 3306.
Once I did this, I could connect to the container using host port 3306.
I wanted to make it easier for others to expose ports on live containers.
So, I created a small repository and a corresponding docker image (named wlan0/redirect).
The same effect as exposing host port 3306 to container 172.17.0.2:3306 can be achieved using this command.
This command saves the trouble of learning how to use iptables.
docker run --privileged -v /proc:/host/proc \
-e HOST_PORT=3306 -e DEST_IP=172.17.0.2 -e DEST_PORT=3306 \
wlan0/redirect:latest
In other words, this kind of solution would not be implemented from a command run in the container, through an Airflow Operator.
As per my understanding, DockerOperator creates a new container, so why is there no way of exposing ports while creating that new container?
First, the EXPOSE part is, as I mentioned here, just metadata added to the image. It is not mandatory.
The runtime (docker run) -p option is about publishing, not exposing: publishing a port and mapping it to a host port (see above) or another container port.
That might not be needed with an Airflow environment, where there is a default network, and even the possibility to set up a custom network or subnetwork.
Which means other (Airflow) containers attached to the same network should be able to access the ports of any container in said network, without needing any -p (publication) or EXPOSE directive.
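A rough sketch of that approach (the DAG id, image, and network name below are illustrative assumptions, not values from this question):

import pendulum
from airflow import DAG
from airflow.providers.docker.operators.docker import DockerOperator

with DAG(dag_id="serve_webapp_demo", start_date=pendulum.datetime(2023, 1, 1), schedule=None) as dag:
    serve_webapp = DockerOperator(
        task_id="serve_webapp",
        image="my-webserver:latest",       # hypothetical image
        network_mode="my_shared_network",  # join an existing user-defined network
        auto_remove="success",
    )
# Any other container attached to "my_shared_network" can then reach the web server
# at http://<container-name>:<container-port>, with no -p publishing and no EXPOSE.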
In order to accomplish this, you will need to subclass the DockerOperator and override the initializer and _run_image_with_mounts method, which uses the API client to create a container with the specified host configuration.
# Imports assumed for this snippet (they are not shown in the original answer;
# the exact module paths may differ between Airflow provider versions):
from typing import List, Optional, Union

from airflow.exceptions import AirflowException
from airflow.providers.docker.operators.docker import DockerOperator, stringify


class DockerOperatorWithExposedPorts(DockerOperator):
    def __init__(self, *args, **kwargs):
        self.port_bindings = kwargs.pop("port_bindings", {})
        if self.port_bindings and kwargs.get("network_mode") == "host":
            self.log.warning("`port_bindings` is not supported in `host` network mode.")
            self.port_bindings = {}
        super().__init__(*args, **kwargs)

    def _run_image_with_mounts(
        self, target_mounts, add_tmp_variable: bool
    ) -> Optional[Union[List[str], str]]:
        """
        NOTE: This method was copied entirely from the base class `DockerOperator`, for the capability
        of performing port publishing.
        """
        if add_tmp_variable:
            self.environment['AIRFLOW_TMP_DIR'] = self.tmp_dir
        else:
            self.environment.pop('AIRFLOW_TMP_DIR', None)
        if not self.cli:
            raise Exception("The 'cli' should be initialized before!")
        self.container = self.cli.create_container(
            command=self.format_command(self.command),
            name=self.container_name,
            environment={**self.environment, **self._private_environment},
            ports=list(self.port_bindings.keys()) if self.port_bindings else None,
            host_config=self.cli.create_host_config(
                auto_remove=False,
                mounts=target_mounts,
                network_mode=self.network_mode,
                shm_size=self.shm_size,
                dns=self.dns,
                dns_search=self.dns_search,
                cpu_shares=int(round(self.cpus * 1024)),
                port_bindings=self.port_bindings if self.port_bindings else None,
                mem_limit=self.mem_limit,
                cap_add=self.cap_add,
                extra_hosts=self.extra_hosts,
                privileged=self.privileged,
                device_requests=self.device_requests,
            ),
            image=self.image,
            user=self.user,
            entrypoint=self.format_command(self.entrypoint),
            working_dir=self.working_dir,
            tty=self.tty,
        )
        logstream = self.cli.attach(container=self.container['Id'], stdout=True, stderr=True, stream=True)
        try:
            self.cli.start(self.container['Id'])

            log_lines = []
            for log_chunk in logstream:
                log_chunk = stringify(log_chunk).strip()
                log_lines.append(log_chunk)
                self.log.info("%s", log_chunk)

            result = self.cli.wait(self.container['Id'])
            if result['StatusCode'] != 0:
                joined_log_lines = "\n".join(log_lines)
                raise AirflowException(f'Docker container failed: {repr(result)} lines {joined_log_lines}')

            if self.retrieve_output:
                return self._attempt_to_retrieve_result()
            elif self.do_xcom_push:
                if len(log_lines) == 0:
                    return None
                try:
                    if self.xcom_all:
                        return log_lines
                    else:
                        return log_lines[-1]
                except StopIteration:
                    # handle the case when there is not a single line to iterate on
                    return None
            return None
        finally:
            if self.auto_remove == "success":
                self.cli.remove_container(self.container['Id'])
            elif self.auto_remove == "force":
                self.cli.remove_container(self.container['Id'], force=True)
Explanation: The create_host_config method of the APIClient has an optional port_bindings keyword argument, and the create_container method has an optional ports argument. These arguments aren't exposed by DockerOperator, so you have to override _run_image_with_mounts with a copy that supplies them from the port_bindings field set in the initializer. You can then pass the ports to publish as a keyword argument to the new operator. Note that in this implementation, the argument is expected to be a dictionary:
t1 = DockerOperatorWithExposedPorts(image=..., task_id=..., port_bindings={5000: 5000, 8080:8080, ...})

flask docker - requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8080): Max retries exceeded with url: /users

I have a Python Flask application that talks to another microservice running on port 8080. When I run it as a plain Flask application, it is able to GET from http://localhost:8080/users.
But when I run it inside Docker it fails with
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8080): Max retries exceeded with url: /users (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f53804b5dc0>: Failed to establish a new connection: [Errno 111] Connection refused'))
at line:
File "/alert.py", line 45, in get_users
r = requests.get('http://localhost:8080/users',verify=False)
This is my main code:
if __name__ == '__main__':
    app.logger.setLevel(logging.INFO)
    data = create_data()
    port = os.getenv('PORT')
    app.run(debug=True, host='0.0.0.0', port=port)
Docker script:
#!/bin/bash
docker build -t covid_service .
docker run -p 5000:5000 covid_service
Docker file:
FROM python:3
ADD alert.py /
RUN pip install flask
RUN pip install requests
EXPOSE 5000
CMD [ "python", "./alert.py" ]
Any help please
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8080): Max retries exceeded with url: /users (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f53804b5dc0>: Failed to establish a new connection: [Errno 111] Connection refused'))
You can not access another container using localhost; inside the container, localhost means the Flask app container itself, not the other microservice.
Change localhost to the host's IP on Linux, or use host.docker.internal on Windows and Mac.
r = requests.get('http://host.docker.internal:8080/users',verify=False)
port = os.getenv('PORT')
You are not receiving the port here, because neither the Dockerfile nor the start command sets it.
Either edit the Dockerfile and add
ENV PORT=5000
or add --env PORT=5000 to the start command.
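As a small sketch (one possible way, assuming Flask is started from the main block as in the question), you can also give the code a default so it still binds a known port when PORT is unset:

import os
from flask import Flask

app = Flask(__name__)  # stands in for the question's app object

if __name__ == '__main__':
    # Fall back to 5000 when the PORT environment variable is not provided.
    port = int(os.getenv('PORT', 5000))
    app.run(debug=True, host='0.0.0.0', port=port)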
If you need to reach the service that is running in Docker via your Dockerfile setup, you should query http://localhost:5000/users

How to make Flask app debug mode to True in a Docker container

I'm running a Flask app in a Docker container but I'm having issues in debugging. In my container I have three micro-services.
docker-compose.yml
version: '2.1'
services:
  files:
    image: busybox
    volumes:
      [..]
  grafana:
    [..]
  prometheus:
    [..]
  aggregatore:
    [..]
  classificatore:
    build: classificatore/.
    volumes:
      - [..]
    volumes_from:
      - files
    ports:
      - [..]
    command: ["python", "/src/main.py"]
    depends_on:
      rabbit:
        condition: service_healthy
  testmicro:
    [..]
  rabbit:
    [..]
In the classificatore service, I build up the Docker as follows:
classificatore/Dockerfile
FROM python:3
RUN mkdir /src
ADD requirements.txt /src/.
WORKDIR /src
RUN pip install -r requirements.txt
ADD . /src/.
RUN mkdir -p /tmp/reqdoc
CMD ["python", "main.py"]
In classificatore/main.py file
from time import time
from sam import firstRead, secondRead, lastRead, createClassificationMatrix
from sam import splitClassificationMatrix, checkIfNeedSplit, printMatrix
from util import Rabbit, log, moveFile
from uuid import uuid4
from flask import Flask, request, render_template, redirect, send_from_directory
import os
import configparser
import json
from prometheus_client import start_http_server, Summary, Counter

config = configparser.ConfigParser()
config.read('config.ini')
rabbit = Rabbit()
inputDir = os.environ['INPUT_DIR'] if 'INPUT_DIR' in os.environ else config['DEFAULT']['INPUT_DIR']

# Create a metric to track time spent
REQUEST_TIME = Summary('classification_processing_seconds', 'Time spent to process a SAM file')
COUNTER_INPUT_FILE_SIZE = Counter('input_sam_size', 'Sum of input SAM file size')
COUNTER_OUTPUT_FILE_SIZE = Counter('output_sam_size', 'Sum of output SAM file size')

start_http_server(8000)

@REQUEST_TIME.time()
def classification(baseNameFile, AU_SIZE):
    nameFile = inputDir + "/" + baseNameFile
    startTime = time()
    numeroLetture = 1
    file_id = str(uuid4())
    log.info("Analizzo il file YYYYY (NomeFile: %s, Id: %s, AU_SIZE: %s)" % (nameFile, file_id, AU_SIZE))
    rnameArray, parameter_set = firstRead(nameFile)
    classificationMatrix = createClassificationMatrix(rnameArray)
    log.info("Creo un numero di range che dovrebbe dividire il file in file da %s reads" % (AU_SIZE))
    while (checkIfNeedSplit(classificationMatrix, AU_SIZE)):
        classificationMatrix = splitClassificationMatrix(classificationMatrix, AU_SIZE)
        log.info("Leggo il file di nuovo, perche' alcuni range sono troppo grandi")
        classificationMatrix = secondRead(nameFile, classificationMatrix)
        numeroLetture = numeroLetture + 1
    printMatrix(classificationMatrix)
    log.info("Sono state fatte %s letture" % (numeroLetture))
    log.info("Adesso scrivo i file")
    au_list = lastRead(nameFile, file_id, classificationMatrix, parameter_set['myRnameDict'])
    COUNTER_INPUT_FILE_SIZE.inc(os.path.getsize(nameFile))
    COUNTER_OUTPUT_FILE_SIZE.inc(moveFile(au_list, file_id))
    rabbit.enque_tasks(parameter_set, au_list, file_id)
    log.info("Tempo totale impiegato: %s sec" % int(time() - startTime))

app = Flask(__name__, template_folder='./web')

@app.route("/")
def index(message=None):
    log.info("Sono PRin index!!!")
    samFiles = os.listdir(config['DEFAULT']['INPUT_DIR'])
    samFiles = list(filter(lambda x: x.endswith('.sam'), samFiles))
    samFiles.sort()
    mpeggFiles = os.listdir(config['DEFAULT']['MPEGG_DIR'])
    mpeggFiles.sort()
    mpeggFiles = list(filter(lambda x: x.endswith('.mpegg'), mpeggFiles))
    return render_template('index.html', samFiles=samFiles, mpeggFiles=mpeggFiles, message=message)

@app.route("/upload", methods=['POST'])
def upload():
    f = request.files['file']
    f.save(os.path.join(config['DEFAULT']['INPUT_DIR'], f.filename))
    return index("Upload avvenuto con successo")

@app.route("/encode", methods=['POST'])
def encode():
    filename = request.form['filename']
    AU_SIZE = int(request.form['AU_SIZE'])
    classification(filename, AU_SIZE)
    return index("Encoding iniziato correttamente per il file: %s" % (filename))

@app.route('/download/<filename>', methods=['GET', 'POST'])
def download(filename):
    log.info("Download %s" % filename)
    mpeggDir = config['DEFAULT']['MPEGG_DIR']
    log.debug("mpeggDir: %s" % mpeggDir)
    filepath = os.path.join(mpeggDir, filename)
    log.debug("My filepath: %s" % filepath)
    return send_from_directory(directory=mpeggDir, filename=filename)

if __name__ == "__main__":
    app.run(host='0.0.0.0', debug=False)
I build up the app by running:
$ docker-compose build
$ docker-compose up -d
To check logs in classificatore:
docker logs <mycontainername>
If in classificatore/main.py
if __name__ == "__main__" :
app.run( host = '0.0.0.0' , debug = False )
I get
* Serving Flask app "main" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: off
2019-05-03 08:38:25,406 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
If in classificatore/main.py I set debug to True
if __name__ == "__main__" :
app.run( host = '0.0.0.0' , debug = True )
I get
* Serving Flask app "main" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: on
2019-05-03 08:40:57,857 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
2019-05-03 08:40:57,858 * Restarting with stat
Traceback (most recent call last):
File "/src/main.py", line 22, in <module>
start_http_server(8000)
File "/usr/local/lib/python3.7/site-packages/prometheus_client/exposition.py", line 181, in start_http_server
httpd = _ThreadingSimpleServer((addr, port), CustomMetricsHandler)
File "/usr/local/lib/python3.7/socketserver.py", line 452, in __init__
self.server_bind()
File "/usr/local/lib/python3.7/http/server.py", line 137, in server_bind
socketserver.TCPServer.server_bind(self)
File "/usr/local/lib/python3.7/socketserver.py", line 466, in server_bind
self.socket.bind(self.server_address)
OSError: [Errno 98] Address already in use
I guess I'm messing around with ports, but I'm still a newbie in Docker.
Any help will be very welcome!
Thank you in advance
EDIT 1: the output of $docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bb7c9a5b80eb encoder_mpeg-pre-encoder "python main.py" 2 minutes ago Up 12 seconds encoder_mpeg-pre-encoder_1
6a523161c191 encoder_classificatore "python /src/main.py" 2 minutes ago Exited (1) 11 seconds ago encoder_classificatore_1
e5d0287e9129 encoder_aggregatore "python /src/main.py" 5 minutes ago Up 12 seconds 0.0.0.0:8000->8000/tcp encoder_aggregatore_1
907327ef0342 grafana/grafana:5.1.0 "/run.sh" 6 minutes ago Up 18 seconds 0.0.0.0:3000->3000/tcp encoder_grafana_1
e57064e76aa1 busybox "sh" 6 minutes ago Exited (0) 18 seconds ago encoder_files_1
2b42907a31c4 rabbitmq "docker-entrypoint.s…" 6 minutes ago Up 18 seconds (healthy) 4369/tcp, 5671/tcp, 25672/tcp, 0.0.0.0:5672->5672/tcp encoder_rabbit_1
3f509108b69d prom/prometheus "/bin/prometheus --c…" 6 minutes ago Up 18 seconds 0.0.0.0:9090->9090/tcp encoder_prometheus_1
I guess you are changing the file inside the Docker container. Ideally, you should change it on the host where the actual development happens, and then build and run the compose again.
Change classificatore/main.py file at docker host -
if __name__ == "__main__" :
app.run( host = '0.0.0.0' , debug = True )
Build and run the app again -
$ docker-compose build
$ docker-compose up -d
It's best to use environment variables in such cases so that you need not change your source code every time for the debug switch.
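For example, a minimal sketch of that idea (FLASK_DEBUG here is an assumed variable name you would set in docker-compose.yml, not something from the question):

import os
from flask import Flask

app = Flask(__name__)

if __name__ == "__main__":
    # Toggle debug mode from the environment instead of editing the source.
    debug = os.environ.get("FLASK_DEBUG", "0") == "1"
    app.run(host="0.0.0.0", debug=debug)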
To build the compose again from scratch run below commands -
$ docker-compose down -v
$ docker-compose build --no-cache
$ docker-compose up -d
In case you still receive errors, share the output of docker ps after running above commands.
I can see the below error:
  File "/src/main.py", line 22, in <module>
    start_http_server(8000)
Your docker ps output shows that something else is already running on port 8000, probably aggregatore. On top of that, start_http_server(8000) is also trying to bind port 8000 on the same network, which is causing the conflict here. Try changing the ports so that the conflict doesn't occur.
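One possible sketch of that (METRICS_PORT is a made-up variable name, set per service in docker-compose.yml):

import os
from prometheus_client import start_http_server

# Let each service bind its own metrics port instead of hard-coding 8000.
start_http_server(int(os.environ.get("METRICS_PORT", "8000")))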
EDIT 1:
By using debug=True you are telling Flask to reload the server each time main.py changes. In doing so, it re-runs main.py each time, killing the app and then restarting it on port 5000. That is expected behaviour.
The problem is that you also have a call to start_http_server(8000) that creates a server on port 8000. Flask does not manage this process, which leads to an exception because the previous instance is already using the port.
The error traceback is clear about this (OSError: [Errno 98] Address already in use), but the hint is that the error happens after restarting the server.
2019-05-03 08:40:57,857 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
2019-05-03 08:40:57,858 * Restarting with stat
Traceback (most recent call last):
File "/src/main.py", line 22, in <module>
start_http_server(8000)
File "/usr/local/lib/python3.7/site-packages/prometheus_client/exposition.py", line 181, in start_http_server
httpd = _ThreadingSimpleServer((addr, port), CustomMetricsHandler)
You'll need to handle the lifecycle of that service outside of the main.py script, or handle the exception.
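A minimal sketch of the "handle the exception" route (my illustration, not code from the question):

from prometheus_client import start_http_server

try:
    start_http_server(8000)
except OSError:
    # The debug reloader re-imports main.py; if the metrics port is already
    # bound by the first process, skip starting a second metrics server.
    pass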
EDIT 2:
Your problem has nothing to do with docker, and is more about setting up prometheus inside a Flask application. Note that prometheus_client/exposition.py is the one that is raising the exception.
There are some extensions that help with this, for instance:
https://github.com/sbarratt/flask-prometheus or
https://github.com/hemajv/flask-prometheus
Maybe this sheds some light on the solution as well, but please note that this is not what you are asking here.
EDIT 3:
I suggest first giving these extensions a shot, which means refactoring your code. From there, if there is a problem implementing the extensions, create another question providing an MCVE (minimal, complete, verifiable example).

PyHive Thrift transport exception: read 0 bytes

I'm trying to connect to HiveServer2 running inside a Docker container (from outside the container) via Python (PyHive 0.5, Python 2.7), using the DB-API (asynchronous) example:
from pyhive import hive
conn = hive.connect(host='172.17.0.2', port='10001', auth='NOSASL')
However, I'm getting following error
Traceback (most recent call last):
File "py_2.py", line 4, in <module>
conn = hive.connect(host='172.17.0.2', port='10001', auth='NOSASL')
File "/home/foodie/anaconda2/lib/python2.7/site-packages/pyhive/hive.py", line 64, in connect
return Connection(*args, **kwargs)
File "/home/foodie/anaconda2/lib/python2.7/site-packages/pyhive/hive.py", line 164, in __init__
response = self._client.OpenSession(open_session_req)
File "/home/foodie/anaconda2/lib/python2.7/site-packages/TCLIService/TCLIService.py", line 187, in OpenSession
return self.recv_OpenSession()
File "/home/foodie/anaconda2/lib/python2.7/site-packages/TCLIService/TCLIService.py", line 199, in recv_OpenSession
(fname, mtype, rseqid) = iprot.readMessageBegin()
File "/home/foodie/anaconda2/lib/python2.7/site-packages/thrift/protocol/TBinaryProtocol.py", line 148, in readMessageBegin
name = self.trans.readAll(sz)
File "/home/foodie/anaconda2/lib/python2.7/site-packages/thrift/transport/TTransport.py", line 60, in readAll
chunk = self.read(sz - have)
File "/home/foodie/anaconda2/lib/python2.7/site-packages/thrift/transport/TTransport.py", line 161, in read
self.__rbuf = BufferIO(self.__trans.read(max(sz, self.__rbuf_size)))
File "/home/foodie/anaconda2/lib/python2.7/site-packages/thrift/transport/TSocket.py", line 132, in read
message='TSocket read 0 bytes')
thrift.transport.TTransport.TTransportException: TSocket read 0 bytes
The docker image that I'm using is this (tag: mysql_corrected).
It runs the following services (as output by the jps command):
992 Master
1810 RunJar
259 DataNode
2611 Jps
584 ResourceManager
1576 RunJar
681 NodeManager
137 NameNode
426 SecondaryNameNode
1690 RunJar
732 HistoryServer
I'm launching the container using
docker run -it -p 8088:8088 -p 8042:8042 -p 4040:4040 -p 18080:18080 -p 10002:10002 -p 10000:10000 -e 3306 -e 9084 -h sandbox -v /home/foodie/docker/w1:/usr/tmp/test rohitbarnwal7/spark:mysql_corrected bash
Furthermore, I perform the following steps to launch the Hive server inside the Docker container:
Start mysql service: service mysqld start
Switch to directory /usr/local/hive: cd $HIVE_HOME
Launch Hive metastore server: nohup bin/hive --service metastore &
Launch Hive server 2: hive --service hive-server2 (note that thrift-server port is already changed to 10001 in /usr/local/hive/conf/hive-site.xml)
Launch beeline shell: beeline
Connect beeline shell with Hive server-2: !connect jdbc:hive2://localhost:10001/default;transportMode=http;httpPath=cliservice
I've already tried the following things without any luck
Making python 2.7.3 as default python version inside docker container (original default is python 2.6.6, python 2.7.3 is installed inside container but isn't default)
Changing the Hive server port to its default value: 10000
Trying to connect with Hive server by running same python script inside the container (it still gives the same error)

Odoo docker, how to create a database from the command line?

I want to create a database from the command line with Docker. I tried these commands:
$ docker run -d -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo --name db postgres:9.5
$ docker run -p 8069:8069 --name odoo --link db:db -t odoo -- -d test
But I got these errors
2017-05-17 05:58:37,441 1 INFO ? odoo: Odoo version 10.0-20170207
2017-05-17 05:58:37,442 1 INFO ? odoo: Using configuration file at /etc/odoo/odoo.conf
2017-05-17 05:58:37,442 1 INFO ? odoo: addons paths: ['/var/lib/odoo/addons/10.0', u'/mnt/extra-addons', u'/usr/lib/python2.7/dist-packages/odoo/addons']
2017-05-17 05:58:37,442 1 INFO ? odoo: database: odoo@172.17.0.2:5432
2017-05-17 05:58:37,443 1 INFO ? odoo.sql_db: Connection to the database failed
Traceback (most recent call last):
File "/usr/bin/odoo", line 9, in <module>
odoo.cli.main()
File "/usr/lib/python2.7/dist-packages/odoo/cli/command.py", line 64, in main
o.run(args)
File "/usr/lib/python2.7/dist-packages/odoo/cli/server.py", line 164, in run
main(args)
File "/usr/lib/python2.7/dist-packages/odoo/cli/server.py", line 138, in main
odoo.service.db._create_empty_database(db_name)
File "/usr/lib/python2.7/dist-packages/odoo/service/db.py", line 79, in _create_empty_database
with closing(db.cursor()) as cr:
File "/usr/lib/python2.7/dist-packages/odoo/sql_db.py", line 622, in cursor
return Cursor(self.__pool, self.dbname, self.dsn, serialized=serialized)
File "/usr/lib/python2.7/dist-packages/odoo/sql_db.py", line 164, in __init__
self._cnx = pool.borrow(dsn)
File "/usr/lib/python2.7/dist-packages/odoo/sql_db.py", line 505, in _locked
return fun(self, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/odoo/sql_db.py", line 573, in borrow
**connection_info)
File "/usr/lib/python2.7/dist-packages/psycopg2/__init__.py", line 164, in connect
conn = _connect(dsn, connection_factory=connection_factory, async=async)
psycopg2.OperationalError: could not connect to server: Connection refused
Is the server running on host "172.17.0.2" and accepting
TCP/IP connections on port 5432?
What is the problem, and is there another way to do this?
