I'm trying to connect to Hive server 2 running inside a docker container (from outside the container) via Python (PyHive 0.5, Python 2.7), following the asynchronous DB-API example:
from pyhive import hive
conn = hive.connect(host='172.17.0.2', port='10001', auth='NOSASL')
However, I'm getting the following error:
Traceback (most recent call last):
File "py_2.py", line 4, in <module>
conn = hive.connect(host='172.17.0.2', port='10001', auth='NOSASL')
File "/home/foodie/anaconda2/lib/python2.7/site-packages/pyhive/hive.py", line 64, in connect
return Connection(*args, **kwargs)
File "/home/foodie/anaconda2/lib/python2.7/site-packages/pyhive/hive.py", line 164, in __init__
response = self._client.OpenSession(open_session_req)
File "/home/foodie/anaconda2/lib/python2.7/site-packages/TCLIService/TCLIService.py", line 187, in OpenSession
return self.recv_OpenSession()
File "/home/foodie/anaconda2/lib/python2.7/site-packages/TCLIService/TCLIService.py", line 199, in recv_OpenSession
(fname, mtype, rseqid) = iprot.readMessageBegin()
File "/home/foodie/anaconda2/lib/python2.7/site-packages/thrift/protocol/TBinaryProtocol.py", line 148, in readMessageBegin
name = self.trans.readAll(sz)
File "/home/foodie/anaconda2/lib/python2.7/site-packages/thrift/transport/TTransport.py", line 60, in readAll
chunk = self.read(sz - have)
File "/home/foodie/anaconda2/lib/python2.7/site-packages/thrift/transport/TTransport.py", line 161, in read
self.__rbuf = BufferIO(self.__trans.read(max(sz, self.__rbuf_size)))
File "/home/foodie/anaconda2/lib/python2.7/site-packages/thrift/transport/TSocket.py", line 132, in read
message='TSocket read 0 bytes')
thrift.transport.TTransport.TTransportException: TSocket read 0 bytes
The docker image that I'm using is this (tag: mysql_corrected).
It runs the following services (as reported by the jps command):
992 Master
1810 RunJar
259 DataNode
2611 Jps
584 ResourceManager
1576 RunJar
681 NodeManager
137 NameNode
426 SecondaryNameNode
1690 RunJar
732 HistoryServer
I'm launching the container with:
docker run -it -p 8088:8088 -p 8042:8042 -p 4040:4040 -p 18080:18080 -p 10002:10002 -p 10000:10000 -e 3306 -e 9084 -h sandbox -v /home/foodie/docker/w1:/usr/tmp/test rohitbarnwal7/spark:mysql_corrected bash
Furthermore, I perform the following steps to launch the Hive server inside the docker container:
Start mysql service: service mysqld start
Switch to directory /usr/local/hive: cd $HIVE_HOME
Launch Hive metastore server: nohup bin/hive --service metastore &
Launch Hive server 2: hive --service hive-server2 (note that the thrift-server port has already been changed to 10001 in /usr/local/hive/conf/hive-site.xml)
Launch beeline shell: beeline
Connect the beeline shell to Hive server 2: !connect jdbc:hive2://localhost:10001/default;transportMode=http;httpPath=cliservice
I've already tried the following things, without any luck:
Making Python 2.7.3 the default python version inside the docker container (the original default is Python 2.6.6; Python 2.7.3 is installed in the container but isn't the default)
Changing the Hive server port back to its default value, 10000
Connecting to the Hive server by running the same python script inside the container (it still gives the same error)
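Before involving Thrift at all, it can help to confirm that something is actually listening on the target host and port: a TSocket read 0 bytes error often means the port is closed, not published by docker run -p, or served by a different transport than the client speaks. A minimal reachability probe (a sketch using only the stdlib; the host and port below are placeholders):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a plain TCP connection to host:port succeeds."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return True
    except (socket.error, socket.timeout):
        # covers refused connections, DNS failures and timeouts
        return False
    finally:
        s.close()

# e.g. port_open('172.17.0.2', 10001) before calling hive.connect(...)
```

If this returns False from outside the container, the problem is port publishing or routing, not PyHive.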
Related
I have a project on my PC that uses RabbitMQ and Cassandra, both installed on Docker. For the project to run successfully, the RabbitMQ and Postgres containers must already be started.
I want to dockerize my project. The Dockerfile is as follows:
FROM python
WORKDIR /docker/test/samt
COPY . /docker/test/samt
RUN pip install --no-cache-dir --upgrade -r /docker/test/samt/requirements.txt
CMD ["python3","app.py","runserver"]
When I create a container from the built image, I get the following error in my Python code:
/usr/local/lib/python3.10/site-packages/flask_sqlalchemy/__init__.py:872: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning.
warnings.warn(FSADeprecationWarning(
Traceback (most recent call last):
File "/docker/test/samt/app.py", line 16, in <module>
FLASK_APP = create_app()
File "/docker/test/samt/project/application.py", line 16, in create_app
rabbit.init_app(app)
File "/docker/test/samt/project/extentions.py", line 27, in init_app
self.connection = pika.BlockingConnection(params)
File "/usr/local/lib/python3.10/site-packages/pika/adapters/blocking_connection.py", line 360, in __init__
self._impl = self._create_connection(parameters, _impl_class)
File "/usr/local/lib/python3.10/site-packages/pika/adapters/blocking_connection.py", line 451, in _create_connection
raise self._reap_last_connection_workflow_error(error)
pika.exceptions.AMQPConnectionError
This error means pika cannot connect to the RabbitMQ container.
I think my dockerized project cannot reach RabbitMQ, yet the RabbitMQ container is started and everything is on the same bridge network. How can I connect the RabbitMQ and Cassandra containers to my dockerized project?
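One common way to wire this up (a sketch, not the poster's actual setup: the service names, images and environment variables below are assumptions) is to run the app and its dependencies in one docker-compose project and have the code connect to the service name rather than localhost, since localhost inside the app container refers to the app container itself:

```yaml
version: "3"
services:
  rabbitmq:
    image: rabbitmq:3
  cassandra:
    image: cassandra:4
  app:
    build: .
    depends_on:
      - rabbitmq
      - cassandra
    environment:
      - RABBITMQ_HOST=rabbitmq    # resolved by Docker's embedded DNS
      - CASSANDRA_HOST=cassandra
```

In the pika parameters the host would then come from the environment (e.g. os.environ['RABBITMQ_HOST']) instead of a hard-coded localhost.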
Odoo 15, installed on an Ubuntu server in a docker container.
I didn't install it myself, and the admin tells me he can't solve this problem.
site1@site1:/var/docker/odoo$ docker exec -it odoo bash
odoo@f6740a7479b8:/$ odoo
2022-07-22 20:23:53,743 59 INFO ? odoo: Odoo version 15.0-20220620
2022-07-22 20:23:53,743 59 INFO ? odoo: Using configuration file at /etc/odoo/odoo.conf
2022-07-22 20:23:53,743 59 INFO ? odoo: addons paths: ['/usr/lib/python3/dist-packages/odoo/addons', '/var/lib/odoo/.local/share/Odoo/addons/15.0', '/var/odoo/custom']
2022-07-22 20:23:53,743 59 INFO ? odoo: database: odoo@db:5432
2022-07-22 20:23:53,914 59 INFO ? odoo.addons.base.models.ir_actions_report: Will use the Wkhtmltopdf binary at /usr/local/bin/wkhtmltopdf
2022-07-22 20:23:54,150 59 INFO ? odoo.service.server: HTTP service (werkzeug) running on 0.0.0.0:8069
Traceback (most recent call last):
File "/usr/bin/odoo", line 8, in <module>
odoo.cli.main()
File "/usr/lib/python3/dist-packages/odoo/cli/command.py", line 61, in main
o.run(args)
File "/usr/lib/python3/dist-packages/odoo/cli/server.py", line 179, in run
main(args)
File "/usr/lib/python3/dist-packages/odoo/cli/server.py", line 173, in main
rc = odoo.service.server.start(preload=preload, stop=stop)
File "/usr/lib/python3/dist-packages/odoo/service/server.py", line 1356, in start
rc = server.run(preload, stop)
File "/usr/lib/python3/dist-packages/odoo/service/server.py", line 907, in run
self.start()
File "/usr/lib/python3/dist-packages/odoo/service/server.py", line 877, in start
self.socket.bind((self.interface, self.port))
OSError: [Errno 98] Address already in use
This error occurs when I try to update a module or just run odoo to see its output.
Despite that, Odoo works: I can develop and update modules manually from the browser.
I tried the following solutions:
Adding xmlrpc_port = 7654 to the config file: didn't work, the error still occurred and the Odoo web interface wasn't available.
Changing ports in the docker-compose file:
version: '3.1'
services:
  web:
    build: ./etc/odoo
    container_name: odoo
    depends_on:
      - db
    ports:
      - "8069:8069"
I tried all the variations; none helped. How can I solve this problem?
Your container has probably already started Odoo, and you are trying to start it a second time.
To execute Odoo commands inside the container, you can try the --no-http parameter so the second process does not attempt to bind the HTTP port again.
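For example (a sketch: the container name matches the compose file above, but the database and module names are placeholders):

```shell
# Run a module update inside the already-running container.
# --no-http avoids binding port 8069, which the main Odoo process holds;
# --stop-after-init exits once the update is done.
docker exec -it odoo odoo --no-http --stop-after-init -d mydb -u my_module
```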
I want to create a database from the command line with docker. I tried these lines:
$ docker run -d -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo --name db postgres:9.5
$ docker run -p 8069:8069 --name odoo --link db:db -t odoo -- -d test
But I got these errors:
2017-05-17 05:58:37,441 1 INFO ? odoo: Odoo version 10.0-20170207
2017-05-17 05:58:37,442 1 INFO ? odoo: Using configuration file at /etc/odoo/odoo.conf
2017-05-17 05:58:37,442 1 INFO ? odoo: addons paths: ['/var/lib/odoo/addons/10.0', u'/mnt/extra-addons', u'/usr/lib/python2.7/dist-packages/odoo/addons']
2017-05-17 05:58:37,442 1 INFO ? odoo: database: odoo@172.17.0.2:5432
2017-05-17 05:58:37,443 1 INFO ? odoo.sql_db: Connection to the database failed
Traceback (most recent call last):
File "/usr/bin/odoo", line 9, in <module>
odoo.cli.main()
File "/usr/lib/python2.7/dist-packages/odoo/cli/command.py", line 64, in main
o.run(args)
File "/usr/lib/python2.7/dist-packages/odoo/cli/server.py", line 164, in run
main(args)
File "/usr/lib/python2.7/dist-packages/odoo/cli/server.py", line 138, in main
odoo.service.db._create_empty_database(db_name)
File "/usr/lib/python2.7/dist-packages/odoo/service/db.py", line 79, in _create_empty_database
with closing(db.cursor()) as cr:
File "/usr/lib/python2.7/dist-packages/odoo/sql_db.py", line 622, in cursor
return Cursor(self.__pool, self.dbname, self.dsn, serialized=serialized)
File "/usr/lib/python2.7/dist-packages/odoo/sql_db.py", line 164, in __init__
self._cnx = pool.borrow(dsn)
File "/usr/lib/python2.7/dist-packages/odoo/sql_db.py", line 505, in _locked
return fun(self, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/odoo/sql_db.py", line 573, in borrow
**connection_info)
File "/usr/lib/python2.7/dist-packages/psycopg2/__init__.py", line 164, in connect
conn = _connect(dsn, connection_factory=connection_factory, async=async)
psycopg2.OperationalError: could not connect to server: Connection refused
    Is the server running on host "172.17.0.2" and accepting
    TCP/IP connections on port 5432?
What is the problem? Is there another way to do this?
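One frequent cause of this exact "Connection refused" is that the postgres:9.5 container is still initializing when Odoo starts. A sketch (using the container names from the commands above) that waits for Postgres to accept connections before launching Odoo:

```shell
docker run -d -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo --name db postgres:9.5

# pg_isready ships in the official postgres image; loop until the
# server inside the db container accepts connections.
until docker exec db pg_isready -U odoo >/dev/null 2>&1; do
  sleep 1
done

docker run -p 8069:8069 --name odoo --link db:db -t odoo -- -d test
```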
I found a data science environment image on Kitematic, so I installed it and tried to use it. Although it ran successfully and the logs say that
The IPython Notebook is running at: https://[all ip addresses on your system]:8888/
I cannot open localhost:8888. Could someone help?
Docker port: 8888
Mac IP and port: 192.168.99.100:32768
Below is Container log on Kitematic.
Generating a 2048 bit RSA private key
...................................+++
........................................................+++
writing new private key to '/key.pem'
-----
[W 22:46:12.956 NotebookApp] Unrecognized alias: '--matplotlib=inline', it will probably have no effect.
[I 22:46:12.966 NotebookApp] Writing notebook server cookie secret to /root/.ipython/profile_default/security/notebook_cookie_secret
[I 22:46:12.968 NotebookApp] Using MathJax from CDN: https://cdn.mathjax.org/mathjax/latest/MathJax.js
[I 22:46:13.013 NotebookApp] Serving notebooks from local directory: /notebooks
[I 22:46:13.015 NotebookApp] 0 active kernels
[I 22:46:13.015 NotebookApp] The IPython Notebook is running at: https://[all ip addresses on your system]:8888/
[I 22:46:13.016 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[W 22:46:49.788 NotebookApp] SSL Error on 7 ('192.168.99.1', 63676): [Errno 1] _ssl.c:510: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number
[E 22:46:49.797 NotebookApp] Uncaught exception
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/tornado/http1connection.py", line 693, in _server_request_loop
ret = yield conn.read_response(request_delegate)
File "/usr/local/lib/python2.7/dist-packages/tornado/gen.py", line 870, in run
value = future.result()
File "/usr/local/lib/python2.7/dist-packages/tornado/concurrent.py", line 215, in result
raise_exc_info(self._exc_info)
File "/usr/local/lib/python2.7/dist-packages/tornado/gen.py", line 876, in run
yielded = self.gen.throw(*exc_info)
File "/usr/local/lib/python2.7/dist-packages/tornado/http1connection.py", line 168, in _read_message
quiet_exceptions=iostream.StreamClosedError)
File "/usr/local/lib/python2.7/dist-packages/tornado/gen.py", line 870, in run
value = future.result()
File "/usr/local/lib/python2.7/dist-packages/tornado/concurrent.py", line 215, in result
raise_exc_info(self._exc_info)
File "<string>", line 3, in raise_exc_info
SSLError: [Errno 1] _ssl.c:510: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number
If you are not on Linux, you would need to port-forward 8888 to your actual host (Windows or Mac) through the VM.
See "Connect to a Service running inside a docker container from outside"
VBoxManage controlvm "default" natpf1 "tcp-port8888,tcp,,8888,,8888"
VBoxManage controlvm "default" natpf1 "udp-port8888,udp,,8888,,8888"
(Replace default with the name of your docker-machine: see docker-machine ls)
You can also try using the IP of your docker machine directly:
docker-machine ip <name of VM>
I am using Celery and RabbitMQ as a message queue, each encapsulated in its own Docker image. When they are connected using the --link parameter in Docker, everything works fine; I've had this setup working for some time now. I want to separate them so that they run on different hosts, so I can no longer use the --link parameter. Now I am getting gaierror: [Errno -2] Name or service not known when I try to connect over AMQP, and I don't understand why.
The server is simply using the rabbitmq container on DockerHub:
docker run --rm --name=qrabbit -p 5672:5672 rabbitmq
I can telnet to this successfully:
$ telnet 192.168.99.100 5672
Trying 192.168.99.100...
Connected to 192.168.99.100.
Escape character is '^]'.
abc
^D
AMQP Connection closed by foreign host.
$
... so I know the server is running.
My client looks like this:
import os
from logging import getLogger, StreamHandler, DEBUG
from serverlib import QueueServer, CeleryMonitor
from celery import Celery
from argparse import ArgumentParser
log = getLogger('server')
log.addHandler(StreamHandler())
log.setLevel(DEBUG)
broker_service_host = os.environ.get('MESSAGE_QUEUE_SERVICE_SERVICE_HOST')
broker = 'amqp://{0}'.format(broker_service_host)
host = ''
port = 8000
retry = 5
if __name__ == '__main__':
    log.info('connecting to {0}, {1}:{2}, retry={3}'.format(broker, host, port, retry))
    app = Celery(broker=broker)
    monitor = CeleryMonitor(app, retry=retry)
    server = QueueServer((host, port), app)
    monitor.start()
    try:
        log.info('listening on {0}:{1}'.format(host, port))
        server.serve_forever()
    except KeyboardInterrupt:
        log.info('shutdown requested')
    except BaseException as e:
        log.error(e)
    finally:
        monitor.shutdown()
I'm fairly certain the external modules (QueueServer and CeleryMonitor) are not part of the problem, because everything runs properly when I do the following:
$ docker run --rm --name=qmaster -e "MESSAGE_QUEUE_SERVICE_SERVICE_HOST=localhost" --link qrabbit:rabbit -p 80:8000 render-task-master
connecting to amqp://localhost, :8000, retry=5
listening on :8000
^Cshutdown requested
$
... but not if I do the following (without the --link parameter):
$ docker run --rm --name=qmaster -e "MESSAGE_QUEUE_SERVICE_SERVICE_HOST=localhost" -p 80:8000 render-task-master
connecting to amqp://localhost, :8000, retry=5
listening on :8000
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/home/celery/serverlib/celerymonitor.py", line 68, in run
'*': self.__state.event
File "/usr/local/lib/python2.7/site-packages/celery/events/__init__.py", line 287, in __init__
self.channel = maybe_channel(channel)
File "/usr/local/lib/python2.7/site-packages/kombu/connection.py", line 1054, in maybe_channel
return channel.default_channel
File "/usr/local/lib/python2.7/site-packages/kombu/connection.py", line 756, in default_channel
self.connection
File "/usr/local/lib/python2.7/site-packages/kombu/connection.py", line 741, in connection
self._connection = self._establish_connection()
File "/usr/local/lib/python2.7/site-packages/kombu/connection.py", line 696, in _establish_connection
conn = self.transport.establish_connection()
File "/usr/local/lib/python2.7/site-packages/kombu/transport/pyamqp.py", line 116, in establish_connection
conn = self.Connection(**opts)
File "/usr/local/lib/python2.7/site-packages/amqp/connection.py", line 165, in __init__
self.transport = self.Transport(host, connect_timeout, ssl)
File "/usr/local/lib/python2.7/site-packages/amqp/connection.py", line 186, in Transport
return create_transport(host, connect_timeout, ssl)
File "/usr/local/lib/python2.7/site-packages/amqp/transport.py", line 299, in create_transport
return TCPTransport(host, connect_timeout)
File "/usr/local/lib/python2.7/site-packages/amqp/transport.py", line 75, in __init__
socket.SOCK_STREAM, SOL_TCP):
gaierror: [Errno -2] Name or service not known
^Cshutdown requested
$
What is the difference between using and not using the --link parameter that might cause this error?
UPDATE:
I've narrowed it down to an error in the monitor class I created:
recv = self.app.events.Receiver(connection, handlers={
    'task-received': self.registerTask,
    'task-failed': self.retryTask,
    'task-succeeded': self.deregisterTask,
    # should process all events to have state up to date
    '*': self.__state.event
})
When this is called, it sits for a few seconds (a timeout?) and then throws the exception. Any idea why this doesn't like the amqp URL specified as amqp://localhost, yet everything works correctly when I use the --link parameter?
Here's the whole method that call is in, for additional context:
def run(self):
    log.info('run')
    self.__state = self.app.events.State()
    with self.app.connection() as connection:
        log.info('got a connection')
        recv = self.app.events.Receiver(connection, handlers={
            'task-received': self.registerTask,
            'task-failed': self.retryTask,
            'task-succeeded': self.deregisterTask,
            # should process all events to have state up to date
            '*': self.__state.event
        })
        log.info('received receiver')
        # Capture until shutdown requested
        while not self.__shutdown:
            log.info('main run loop')
            try:
                recv.capture(limit=None, timeout=1, wakeup=True)
            except timeout:
                # timeout exception is fired when nothing occurs
                # during timeout. Just ignore it.
                pass
I found the issue: I had set CELERY_BROKER_URL in the Docker environment the container was running in, and this was causing the backend to attempt to connect to a hostname that did not exist. Once I unset the variable, everything hooked up properly in my environment.
$ docker inspect server
<... removed ...>
"Env": [
    "MESSAGE_QUEUE_SERVICE_SERVICE_HOST=192.168.99.100",
    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
    "LANG=C.UTF-8",
    "PYTHON_VERSION=2.7.10",
    "PYTHON_PIP_VERSION=7.1.2",
    "CELERY_VERSION=3.1.18",
    "CELERY_BROKER_URL=amqp://guest@rabbit"
],
<... removed ...>
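A defensive version of the fix in the client script itself (a sketch; the variable names mirror the code above, and removing the override from the image environment is the cleaner solution):

```python
import os

# CELERY_BROKER_URL left over in the image environment can override the
# broker the script intends to use; drop any stale value before building
# the broker URL from the service-host variable.
os.environ.pop('CELERY_BROKER_URL', None)

broker_service_host = os.environ.get(
    'MESSAGE_QUEUE_SERVICE_SERVICE_HOST', 'localhost')
broker = 'amqp://{0}'.format(broker_service_host)
```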