Can't connect Dockerized project to RabbitMQ or Cassandra container - docker

I have a project on my PC that uses RabbitMQ and Cassandra, both of which are installed in Docker. For the project to run successfully, the RabbitMQ and Cassandra containers must be started.
I want to dockerize my project. The Dockerfile is as follows:
FROM python
WORKDIR /docker/test/samt
COPY . /docker/test/samt
RUN pip install --no-cache-dir --upgrade -r /docker/test/samt/requirements.txt
CMD ["python3","app.py","runserver"]
When I create a container from the built image, I get the following error in my Python code:
/usr/local/lib/python3.10/site-packages/flask_sqlalchemy/__init__.py:872: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning.
warnings.warn(FSADeprecationWarning(
Traceback (most recent call last):
File "/docker/test/samt/app.py", line 16, in <module>
FLASK_APP = create_app()
File "/docker/test/samt/project/application.py", line 16, in create_app
rabbit.init_app(app)
File "/docker/test/samt/project/extentions.py", line 27, in init_app
self.connection = pika.BlockingConnection(params)
File "/usr/local/lib/python3.10/site-packages/pika/adapters/blocking_connection.py", line 360, in __init__
self._impl = self._create_connection(parameters, _impl_class)
File "/usr/local/lib/python3.10/site-packages/pika/adapters/blocking_connection.py", line 451, in _create_connection
raise self._reap_last_connection_workflow_error(error)
pika.exceptions.AMQPConnectionError
This error means pika cannot connect to the RabbitMQ container.
I think my dockerized project cannot reach RabbitMQ, even though the RabbitMQ container is started and everything is on the same bridge network. How can I connect the RabbitMQ and Cassandra containers to my dockerized project?
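A common cause of this is the application still pointing pika at localhost (or 127.0.0.1), which inside a container refers to the application container itself, not to RabbitMQ. A minimal docker-compose sketch of one way to wire things up, assuming hypothetical service names rabbitmq and cassandra and an app that reads its broker host from an environment variable (the variable names are also hypothetical):

version: '3'
services:
  app:
    build: .
    environment:
      - RABBITMQ_HOST=rabbitmq      # hypothetical: the app would read this
      - CASSANDRA_HOST=cassandra    # and pass it to its drivers
    depends_on:
      - rabbitmq
      - cassandra
  rabbitmq:
    image: rabbitmq:3
  cassandra:
    image: cassandra:4

On the network Compose creates, containers can reach each other by service name, so pika.ConnectionParameters(host='rabbitmq') would resolve to the broker container.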

Related

Odoo: error 98 Address already in use, how to fix?

Odoo 15, installed in a Docker container on an Ubuntu server.
I didn't install it myself, and the admin tells me he can't solve this problem.
site1@site1:/var/docker/odoo$ docker exec -it odoo bash
odoo@f6740a7479b8:/$ odoo
2022-07-22 20:23:53,743 59 INFO ? odoo: Odoo version 15.0-20220620
2022-07-22 20:23:53,743 59 INFO ? odoo: Using configuration file at /etc/odoo/odoo.conf
2022-07-22 20:23:53,743 59 INFO ? odoo: addons paths: ['/usr/lib/python3/dist-packages/odoo/addons', '/var/lib/odoo/.local/share/Odoo/addons/15.0', '/var/odoo/custom']
2022-07-22 20:23:53,743 59 INFO ? odoo: database: odoo@db:5432
2022-07-22 20:23:53,914 59 INFO ? odoo.addons.base.models.ir_actions_report: Will use the Wkhtmltopdf binary at /usr/local/bin/wkhtmltopdf
2022-07-22 20:23:54,150 59 INFO ? odoo.service.server: HTTP service (werkzeug) running on 0.0.0.0:8069
Traceback (most recent call last):
File "/usr/bin/odoo", line 8, in <module>
odoo.cli.main()
File "/usr/lib/python3/dist-packages/odoo/cli/command.py", line 61, in main
o.run(args)
File "/usr/lib/python3/dist-packages/odoo/cli/server.py", line 179, in run
main(args)
File "/usr/lib/python3/dist-packages/odoo/cli/server.py", line 173, in main
rc = odoo.service.server.start(preload=preload, stop=stop)
File "/usr/lib/python3/dist-packages/odoo/service/server.py", line 1356, in start
rc = server.run(preload, stop)
File "/usr/lib/python3/dist-packages/odoo/service/server.py", line 907, in run
self.start()
File "/usr/lib/python3/dist-packages/odoo/service/server.py", line 877, in start
self.socket.bind((self.interface, self.port))
OSError: [Errno 98] Address already in use
This error occurs when I try to update a module or just run odoo to see its output. Odoo itself works, though: I can develop and update modules manually from the browser.
I tried the following solutions:
Adding xmlrpc_port = 7654 to the config file: didn't work, the error still occurred and the Odoo web interface wasn't available.
Changing the ports in the docker-compose file:
version: '3.1'
services:
  web:
    build: ./etc/odoo
    container_name: odoo
    depends_on:
      - db
    ports:
      - "8069:8069"
None of the variations helped. How do I solve this problem?
Your container probably already started Odoo, and you are trying to start it a second time.
To execute commands in the container, you can try the --no-http parameter.
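A hedged sketch of what that could look like, assuming the database is named odoo and using a hypothetical module name my_module:

docker exec -it odoo odoo --no-http -d odoo -u my_module --stop-after-init

With --no-http the second odoo process never binds the HTTP port, so it no longer collides with the instance already listening on 8069.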

How to resolve exceptions.OSError: [Errno 1] Operation not permitted (docker container)?

I am trying to scan BLE devices with bluepy. My scan.py code is --
from bluepy.btle import Scanner, DefaultDelegate

class ScanDelegate(DefaultDelegate):
    def __init__(self):
        DefaultDelegate.__init__(self)

    def handleDiscovery(self, dev, isNewDev, isNewData):
        if isNewDev:
            print "Discovered device", dev.addr
        elif isNewData:
            print "Received new data from", dev.addr

# prepare scanner
scanner = Scanner().withDelegate(ScanDelegate())

# scan for 5 seconds
devices = scanner.scan(5.0)

for dev in devices:
    print "Device %s (%s), RSSI=%d dB" % (dev.addr, dev.addrType, dev.rssi)
    for (adtype, desc, value) in dev.getScanData():
        print " %s = %s" % (desc, value)
According to the documentation (mentioned at the very end as a note) --
(1) LE scanning must be run as root
That means we need to run the script with sudo. I run it as --
sudo python scan.py
Basically, bluepy-helper requires sudo to scan. To run the code without sudo, the capabilities have to be set on bluepy-helper. Following that solution, I did --
sudo setcap 'cap_net_raw,cap_net_admin+eip' /usr/local/lib/python2.7/site-packages/bluepy/bluepy-helper
From the terminal, the scan code now runs without sudo, like --
python scan.py
Finally, I made a Dockerfile --
FROM arm32v7/python:2.7.15-jessie
WORKDIR /usr/app/gfi_ble
COPY . /usr/app/gfi_ble
RUN chmod 755 ./setcap_for_bluepy_helper.sh
RUN pip install -r requirements.txt
CMD ["./setcap_for_bluepy_helper.sh", "--", "python", "src/scan.py"]
The content of the setcap_for_bluepy_helper.sh is --
#!/bin/bash
cmd="$#"
>&2 setcap 'cap_net_raw,cap_net_admin+eip' /usr/local/lib/python2.7/site-packages/bluepy/bluepy-helper
exec $cmd
The image is created successfully, but when I run the container I get an error like --
Creating con_gfi_ble ... done
Attaching to con_gfi_ble
con_gfi_ble | 2019-01-12 23:06:24+0000 [-] Unhandled Error
con_gfi_ble | Traceback (most recent call last):
con_gfi_ble | File "/usr/app/gfi_ble/src/scan.py", line 17, in new_devices
con_gfi_ble | devices = scanner.scan(5.0)
con_gfi_ble | File "/usr/local/lib/python2.7/site-packages/bluepy/btle.py", line 852, in scan
con_gfi_ble | self.start(passive=passive)
con_gfi_ble | File "/usr/local/lib/python2.7/site-packages/bluepy/btle.py", line 789, in start
con_gfi_ble | self._startHelper(iface=self.iface)
con_gfi_ble | File "/usr/local/lib/python2.7/site-packages/bluepy/btle.py", line 284, in _startHelper
con_gfi_ble | preexec_fn = preexec_function)
con_gfi_ble | File "/usr/local/lib/python2.7/subprocess.py", line 394, in __init__
con_gfi_ble | errread, errwrite)
con_gfi_ble | File "/usr/local/lib/python2.7/subprocess.py", line 1047, in _execute_child
con_gfi_ble | raise child_exception
con_gfi_ble | exceptions.OSError: [Errno 1] Operation not permitted
con_gfi_ble |
Question: What does exceptions.OSError: [Errno 1] Operation not permitted mean?
My code works fine when I run it from the terminal. What's wrong with the container? Any ideas?
Docker containers run with reduced capabilities. This prevents root inside a container from escaping the container by running kernel commands without namespaces, and accessing parts of the host outside of the container, like raw network interfaces or physical devices. You need to add capabilities to the container externally if you need them, but understand this reduces the security provided by docker's default settings.
From docker run, this looks like:
docker run --cap-add=NET_ADMIN --cap-add=NET_RAW ...
https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities
In a compose file, this looks like:
version: '2'
services:
  app:
    image: your_image
    cap_add:
      - NET_ADMIN
      - NET_RAW
Ref: https://docs.docker.com/compose/compose-file/
This will not work with swarm mode. Work is ongoing for adding the ability to run commands with added capabilities within swarm mode. There are ugly workarounds if you need this.
Note that you should not be running sudo inside of a container. Doing so means everything has access to promote itself to root and defeats the purpose of running anything as a user. Instead you should start the container as root and drop to a regular user as soon as possible, which is a one way operation.
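A minimal sketch of that start-as-root-then-drop pattern for this case, assuming the image has gosu installed and an appuser account created (both are assumptions, not part of the original setup):

#!/bin/sh
# entrypoint.sh -- runs as root: do the privileged setup first
setcap 'cap_net_raw,cap_net_admin+eip' /usr/local/lib/python2.7/site-packages/bluepy/bluepy-helper
# then drop to the unprivileged user; exec replaces the shell,
# so no root process remains in the container
exec gosu appuser "$@"

Run with the capabilities added as above, the setcap call succeeds and the Python code itself never runs as root.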

PyHive Thrift transport exception: read 0 bytes

I'm trying to connect to Hive Server 2 running inside a Docker container (from outside the container) via Python (PyHive 0.5, Python 2.7), using the DB-API (asynchronous) example:
from pyhive import hive
conn = hive.connect(host='172.17.0.2', port='10001', auth='NOSASL')
However, I'm getting the following error:
Traceback (most recent call last):
File "py_2.py", line 4, in <module>
conn = hive.connect(host='172.17.0.2', port='10001', auth='NOSASL')
File "/home/foodie/anaconda2/lib/python2.7/site-packages/pyhive/hive.py", line 64, in connect
return Connection(*args, **kwargs)
File "/home/foodie/anaconda2/lib/python2.7/site-packages/pyhive/hive.py", line 164, in __init__
response = self._client.OpenSession(open_session_req)
File "/home/foodie/anaconda2/lib/python2.7/site-packages/TCLIService/TCLIService.py", line 187, in OpenSession
return self.recv_OpenSession()
File "/home/foodie/anaconda2/lib/python2.7/site-packages/TCLIService/TCLIService.py", line 199, in recv_OpenSession
(fname, mtype, rseqid) = iprot.readMessageBegin()
File "/home/foodie/anaconda2/lib/python2.7/site-packages/thrift/protocol/TBinaryProtocol.py", line 148, in readMessageBegin
name = self.trans.readAll(sz)
File "/home/foodie/anaconda2/lib/python2.7/site-packages/thrift/transport/TTransport.py", line 60, in readAll
chunk = self.read(sz - have)
File "/home/foodie/anaconda2/lib/python2.7/site-packages/thrift/transport/TTransport.py", line 161, in read
self.__rbuf = BufferIO(self.__trans.read(max(sz, self.__rbuf_size)))
File "/home/foodie/anaconda2/lib/python2.7/site-packages/thrift/transport/TSocket.py", line 132, in read
message='TSocket read 0 bytes')
thrift.transport.TTransport.TTransportException: TSocket read 0 bytes
The Docker image that I'm using is this one (tag: mysql_corrected).
It runs the following services (as output by the jps command):
992 Master
1810 RunJar
259 DataNode
2611 Jps
584 ResourceManager
1576 RunJar
681 NodeManager
137 NameNode
426 SecondaryNameNode
1690 RunJar
732 HistoryServer
I'm launching the container using
docker run -it -p 8088:8088 -p 8042:8042 -p 4040:4040 -p 18080:18080 -p 10002:10002 -p 10000:10000 -e 3306 -e 9084 -h sandbox -v /home/foodie/docker/w1:/usr/tmp/test rohitbarnwal7/spark:mysql_corrected bash
Furthermore, I perform the following steps to launch the Hive server inside the docker container:
Start mysql service: service mysqld start
Switch to directory /usr/local/hive: cd $HIVE_HOME
Launch Hive metastore server: nohup bin/hive --service metastore &
Launch Hive server 2: hive --service hive-server2 (note that thrift-server port is already changed to 10001 in /usr/local/hive/conf/hive-site.xml)
Launch beeline shell: beeline
Connect beeline shell with Hive server-2: !connect jdbc:hive2://localhost:10001/default;transportMode=http;httpPath=cliservice
I've already tried the following things, without any luck:
Making Python 2.7.3 the default Python version inside the docker container (the original default is Python 2.6.6; Python 2.7.3 is installed inside the container but isn't the default)
Changing the Hive server port back to its default value, 10000
Connecting to the Hive server by running the same Python script inside the container (it still gives the same error)
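One thing worth checking (a debugging sketch, not a fix from the original thread): TSocket read 0 bytes means the TCP connection opened but the server closed it without sending a Thrift reply. That is consistent with a transport mismatch, for example a client speaking binary Thrift to a server running in HTTP transport mode, which the beeline URL above (transportMode=http) suggests. A quick Python 2 probe using the IP and port from the question:

import socket

# if this connects, the port is reachable; an empty reply or immediate
# close mirrors the "TSocket read 0 bytes" symptom
s = socket.create_connection(('172.17.0.2', 10001), timeout=5)
s.sendall('junk\r\n\r\n')
try:
    print repr(s.recv(64))  # '' means the peer closed the connection
except socket.timeout:
    print 'no reply within 5 seconds'
s.close()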

Can't connect Celery server to RabbitMQ on localhost

I am using Celery and RabbitMQ as a message queue, where each is encapsulated in its own Docker image. When they are connected using the --link parameter in Docker, everything works fine; I've had this setup working for some time now. I now want to separate them so that they run on different hosts, which means I can no longer use the --link parameter. I am getting gaierror: [Errno -2] Name or service not known when I try to connect using AMQP and don't understand why.
The server is simply using the rabbitmq container on DockerHub:
docker run --rm --name=qrabbit -p 5672:5672 rabbitmq
I can telnet to this successfully:
$ telnet 192.168.99.100 5672
Trying 192.168.99.100...
Connected to 192.168.99.100.
Escape character is '^]'.
abc
^D
AMQP Connection closed by foreign host.
$
... so I know the server is running.
My client looks like this:
import os
from logging import getLogger, StreamHandler, DEBUG
from serverlib import QueueServer, CeleryMonitor
from celery import Celery
from argparse import ArgumentParser

log = getLogger('server')
log.addHandler(StreamHandler())
log.setLevel(DEBUG)

broker_service_host = os.environ.get('MESSAGE_QUEUE_SERVICE_SERVICE_HOST')
broker = 'amqp://{0}'.format(broker_service_host)
host = ''
port = 8000
retry = 5

if __name__ == '__main__':
    log.info('connecting to {0}, {1}:{2}, retry={3}'.format(broker, host, port, retry))
    app = Celery(broker=broker)
    monitor = CeleryMonitor(app, retry=retry)
    server = QueueServer((host, port), app)
    monitor.start()
    try:
        log.info('listening on {0}:{1}'.format(host, port))
        server.serve_forever()
    except KeyboardInterrupt:
        log.info('shutdown requested')
    except BaseException as e:
        log.error(e)
    finally:
        monitor.shutdown()
I'm somewhat certain the external modules (QueueServer and CeleryMonitor) are not part of the problem, as it runs properly when I do the following:
$ docker run --rm --name=qmaster -e "MESSAGE_QUEUE_SERVICE_SERVICE_HOST=localhost" --link qrabbit:rabbit -p 80:8000 render-task-master
connecting to amqp://localhost, :8000, retry=5
listening on :8000
^Cshutdown requested
$
... but not if I do the following (without the --link parameter):
$ docker run --rm --name=qmaster -e "MESSAGE_QUEUE_SERVICE_SERVICE_HOST=localhost" -p 80:8000 render-task-master
connecting to amqp://localhost, :8000, retry=5
listening on :8000
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/home/celery/serverlib/celerymonitor.py", line 68, in run
'*': self.__state.event
File "/usr/local/lib/python2.7/site-packages/celery/events/__init__.py", line 287, in __init__
self.channel = maybe_channel(channel)
File "/usr/local/lib/python2.7/site-packages/kombu/connection.py", line 1054, in maybe_channel
return channel.default_channel
File "/usr/local/lib/python2.7/site-packages/kombu/connection.py", line 756, in default_channel
self.connection
File "/usr/local/lib/python2.7/site-packages/kombu/connection.py", line 741, in connection
self._connection = self._establish_connection()
File "/usr/local/lib/python2.7/site-packages/kombu/connection.py", line 696, in _establish_connection
conn = self.transport.establish_connection()
File "/usr/local/lib/python2.7/site-packages/kombu/transport/pyamqp.py", line 116, in establish_connection
conn = self.Connection(**opts)
File "/usr/local/lib/python2.7/site-packages/amqp/connection.py", line 165, in __init__
self.transport = self.Transport(host, connect_timeout, ssl)
File "/usr/local/lib/python2.7/site-packages/amqp/connection.py", line 186, in Transport
return create_transport(host, connect_timeout, ssl)
File "/usr/local/lib/python2.7/site-packages/amqp/transport.py", line 299, in create_transport
return TCPTransport(host, connect_timeout)
File "/usr/local/lib/python2.7/site-packages/amqp/transport.py", line 75, in __init__
socket.SOCK_STREAM, SOL_TCP):
gaierror: [Errno -2] Name or service not known
^Cshutdown requested
$
What is the difference between using and not using the --link parameter that might cause this error?
UPDATE:
I've narrowed it down to an error in the monitor class I created:
recv = self.app.events.Receiver(connection, handlers={
    'task-received': self.registerTask,
    'task-failed': self.retryTask,
    'task-succeeded': self.deregisterTask,
    # should process all events to have state up to date
    '*': self.__state.event
})
When this is called, it sits for a few seconds (timeout?) and then throws an exception. Any idea why this wouldn't like the amqp URL specified as amqp://localhost but everything works correctly when I use the --link parameter?
Here's the whole method that call is in, for additional context:
def run(self):
    log.info('run')
    self.__state = self.app.events.State()
    with self.app.connection() as connection:
        log.info('got a connection')
        recv = self.app.events.Receiver(connection, handlers={
            'task-received': self.registerTask,
            'task-failed': self.retryTask,
            'task-succeeded': self.deregisterTask,
            # should process all events to have state up to date
            '*': self.__state.event
        })
        log.info('received receiver')
        # Capture until shutdown requested
        while not self.__shutdown:
            log.info('main run loop')
            try:
                recv.capture(limit=None, timeout=1, wakeup=True)
            except timeout:
                # timeout exception is fired when nothing occurs
                # during timeout. Just ignore it.
                pass
I found the issue: I had set CELERY_BROKER_URL in the Docker environment in which the container was running, and this was causing the backend to attempt to connect to a hostname that did not exist. Once I unset the variable, everything hooked up properly in my environment.
$ docker inspect server
<... removed ...>
"Env": [
"MESSAGE_QUEUE_SERVICE_SERVICE_HOST=192.168.99.100",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"LANG=C.UTF-8",
"PYTHON_VERSION=2.7.10",
"PYTHON_PIP_VERSION=7.1.2",
"CELERY_VERSION=3.1.18",
"CELERY_BROKER_URL=amqp://guest#rabbit"
],
<... removed ...>
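Per the update above, that variable took precedence over the broker= argument, and amqp://guest@rabbit only resolves when --link qrabbit:rabbit injects a rabbit entry into the container's /etc/hosts. A hedged sketch of a fix that avoids rebuilding the image: override the variable at run time so it points at an address that actually resolves (IP taken from the question):

docker run --rm --name=qmaster \
    -e "MESSAGE_QUEUE_SERVICE_SERVICE_HOST=192.168.99.100" \
    -e "CELERY_BROKER_URL=amqp://192.168.99.100" \
    -p 80:8000 render-task-master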

docker compose, vagrant and insecure Repository

I have set up docker-compose to pull my image from a custom registry.
Here is what the YAML file looks like:
my_service:
  image: d-myrepo:5000/mycompany/my_service:latest
  ports:
    - "8079:8079"
Now if I run vagrant up, I get errors:
==> default: File "/code/build/docker-compose/out00-PYZ.pyz/docker.auth.auth", line 46, in expand_registry_url
==> default: docker.errors
==> default: .
==> default: DockerException
==> default: :
==> default: HTTPS endpoint unresponsive and insecure mode isn't enabled.
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
/usr/local/bin/docker-compose -f "/vagrant/docker-compose.yml" up -d
Stdout from the command:
Stderr from the command:
stdin: is not a tty
Creating vagrant_y2y_1...
Pulling image d-myrepo:5000/mycompany/my_service:latest...
Traceback (most recent call last):
File "<string>", line 3, in <module>
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.main", line 31, in main
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.docopt_command", line 21, in sys_dispatch
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.command", line 27, in dispatch
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.docopt_command", line 24, in dispatch
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.command", line 59, in perform_command
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.main", line 464, in up
File "/code/build/docker-compose/out00-PYZ.pyz/compose.project", line 208, in up
File "/code/build/docker-compose/out00-PYZ.pyz/compose.service", line 214, in recreate_containers
File "/code/build/docker-compose/out00-PYZ.pyz/compose.service", line 197, in create_container
File "/code/build/docker-compose/out00-PYZ.pyz/docker.client", line 710, in pull
File "/code/build/docker-compose/out00-PYZ.pyz/docker.auth.auth", line 67, in resolve_repository_name
File "/code/build/docker-compose/out00-PYZ.pyz/docker.auth.auth", line 46, in expand_registry_url
docker.errors.DockerException: HTTPS endpoint unresponsive and insecure mode isn't enabled.
I read on the internet that it has to do with having an insecure registry.
It only works if I edit the file
/etc/default/docker
with content
DOCKER_OPTS="-r=true --insecure-registry d-myrepo:5000 ${DOCKER_OPTS}"
then restart the docker service and manually pull the image, i.e.
docker pull d-myrepo:5000/mycompany/my_service:latest
Is there a way to avoid this error and have the provisioning run smoothly? Maybe I am missing an option inside the docker-compose.yml file?
Thanks for your feedback; the best way to achieve this is to set up the Vagrant provisioning the following way:
config.vm.provision :docker
config.vm.provision :docker_compose
config.vm.provision "shell", path: "provision.sh", privileged: false
while the shell script provision.sh includes the following relevant lines:
sudo echo "DOCKER_OPTS=\"-r=true --insecure-registry my_repo:5000 \${DOCKER_OPTS}\"" | sudo tee /etc/default/docker
sudo service docker restart
sudo /usr/local/bin/docker-compose -f /vagrant/docker-compose.yml up -d --allow-insecure-ssl
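As an aside (not part of the original answer): on newer Docker installs the same setting lives in /etc/docker/daemon.json rather than in DOCKER_OPTS in /etc/default/docker:

{
  "insecure-registries": ["d-myrepo:5000"]
}

followed by a restart of the docker service.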
