Odoo example with docker compose not working - docker

This is the official Odoo docker-compose example file:
version: '2'
services:
  web:
    image: odoo:10.0
    depends_on:
      - db
    ports:
      - "8069:8069"
    volumes:
      - odoo-web-data:/var/lib/odoo
      - ./config:/etc/odoo
      - ./addons:/mnt/extra-addons
  db:
    image: postgres:9.4
    environment:
      - POSTGRES_PASSWORD=odoo
      - POSTGRES_USER=odoo
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - odoo-db-data:/var/lib/postgresql/data/pgdata
volumes:
  odoo-web-data:
  odoo-db-data:
When I run 'docker-compose up -d', it outputs the following error:
ERROR: for test2_db_1 Cannot start service db: failed to create endpoint test2_db_1 on network test2_default: failed to add the host (veth95f6516) <=> sandbox (veth4949623) pair interfaces: operation not supported
ERROR: for db Cannot start service db: failed to create endpoint test2_db_1 on network test2_default: failed to add the host (veth95f6516) <=> sandbox (veth4949623) pair interfaces: operation not supported
ERROR: Encountered errors while bringing up the project.
The docker-compose.yml file is inside the test2 directory.
This is Odoo with Docker docs: https://hub.docker.com/_/odoo/
What could be happening?
Thanks!

Whenever you see errors related to veth interfaces, it usually means that the Docker service has gotten into a state where network allocation no longer works:
ERROR: for test2_db_1 Cannot start service db: failed to create endpoint test2_db_1 on network test2_default: failed to add the host (veth95f6516) <=> sandbox (veth4949623) pair interfaces: operation not supported
ERROR: for db Cannot start service db: failed to create endpoint test2_db_1 on network test2_default: failed to add the host (veth95f6516) <=> sandbox (veth4949623) pair interfaces: operation not supported
ERROR: Encountered errors while bringing up the project.
You should restart the Docker service in such cases. If that doesn't help, restart the whole system.
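For example, on a systemd-based host (an assumption; the exact commands depend on your distribution and init system), that would look like:
sudo systemctl restart docker    # restart the Docker daemon
sudo reboot                      # last resort: reboot the whole host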

Related

host.docker.internal fails to connect in Oracle VM

I have an application installed in a VM (CentOS 7) running on Oracle VirtualBox with a "Bridged Network". While communicating between internal containers using http://host.docker.internal:6001 (another container's port), it throws an error saying:
request to http://host.docker.internal failed, reason: ETIMEDOUT 172.17.0.1:6001
whereas the same application, while running locally, is able to connect to http://host.docker.internal.
Please help.
My docker-compose file:
services:
  configuration-service:
    image: configuration-service:latest
    restart: always
    env_file: configuration-service.env
    extra_hosts:
      - "host.docker.internal:host-gateway"
  dose-sheet-anonymizer-rs:
    image: anonymizer-rs:latest
    restart: always
    ports:
      - "6001:6001"
    extra_hosts:
      - "host.docker.internal:host-gateway"
Env file:
anoymizer-host=http://host.docker.internal:6001

Azure Blob Storage error in Django - Failed to establish a new connection: [Errno 111]

I am setting up local Azure Blob Storage using a Docker container & Docker Compose.
However, when I start creating blob containers and uploading files, it throws the error below.
azure.common.AzureException: HTTPConnectionPool(host='127.0.0.1', port=10000): Max retries exceeded with url: /devstoreaccount1/quickstartblobs?restype=container (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f1068d0f748>: Failed to establish a new connection: [Errno 111] Connection refused',))
Here is my docker-compose:
version: "3.9"
services:
db:
image: postgres
volumes:
- ./data/db:/var/lib/postgresql/data
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- DEBUG=FALSE
- AZURE_STORAGE_CONNECTION_STRING=DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=https://127.0.0.1:10000/devstoreaccount1;
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- 8000:8000
- 5678:5678
depends_on:
- db
azurite:
image: mcr.microsoft.com/azure-storage/azurite
ports:
- "127.0.0.1:10000:10000"
requirements.txt:
djangorestframework==3.11.2
Django==3.1.8
Pygments==2.7.4
Markdown==3.2.1
coreapi==2.3.3
psycopg2-binary==2.8.4
dj-database-url==0.5.0
gunicorn==20.0.4
whitenoise==5.0.1
PyYAML==5.4
azure-storage-blob==2.1.0
ptvsd==4.3.2
azure-common==1.1.23
azure-storage-common==2.1.0
requests==2.25.1
six==1.11.0
urllib3==1.26.3
Code:
blob_service_client = BlockBlobService(
    account_name='devstoreaccount1',
    account_key='Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==',
    is_emulated=True)

# Create a container called 'quickstartblobs'.
container_name = 'quickstartblobs'
blob_service_client.create_container(container_name)
You can remove the ports section for the azurite service in your compose file and, in your application, provide a connection string that specifies the blob endpoint (as mentioned here: https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azurite#connection-strings) as BlobEndpoint=http://azurite:10000.
When you use the local Docker bridge network (created for services deployed with Compose), the container name, if provided explicitly, or otherwise the service name, can be used to reach the service.
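As an illustration only (reusing the well-known Azurite development account key already shown above, and assuming the connection string is passed to whichever service actually talks to Azurite), the environment entry might look like:
web:
  environment:
    # Point the blob endpoint at the azurite service name over plain HTTP
    - AZURE_STORAGE_CONNECTION_STRING=DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite:10000/devstoreaccount1;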

Google Cloud Run health check fails with Docker-Compose

I am trying to upload my backend to Google Cloud Run. I'm using Docker Compose with two components: a Golang server and a Postgres DB.
When I run Docker Compose locally, everything works great! When I deploy to Google Cloud with
gcloud builds submit . --tag gcr.io/BACKEND_NAME
gcloud run deploy --image gcr.io/BACKEND_NAME --platform managed
Gcloud's health check fails, getting stuck on Deploying... Revision deployment finished. Waiting for health check to begin. and throws Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
I understand that Google Cloud Run provides a PORT env variable, which I tried to account for in my docker-compose.yml. But the command still fails. I'm out of ideas, what could be wrong here?
Here is my docker-compose.yml
version: '3'
services:
  db:
    image: postgres:latest # use latest official postgres version
    container_name: db
    restart: "always"
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=db
    volumes:
      - database-data:/var/lib/postgresql/data/ # persist data even if container shuts down
  api:
    container_name: api
    depends_on:
      - db
    restart: on-failure
    build: .
    ports:
      # Bind GCR provided incoming PORT to port 8000 of our api
      - "${PORT}:8000"
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=db
volumes:
  database-data: # named volumes can be managed easier using docker-compose
and the api container is a Golang binary, which waits for a connection to be made with the Postgres DB before calling http.ListenAndServe(":8000", handler).

Using Docker to set up a Tendermint testnet and establishing communication between ABCI and Tendermint Core

I am trying to integrate my own ABCI application with the localnet. The docker-compose file looks as follows:
version: '3'
services:
  node0:
    container_name: node0
    image: "tendermint/localnode"
    ports:
      - "26656-26657:26656-26657"
    environment:
      - ID=0
      - LOG=${LOG:-tendermint.log}
    volumes:
      - ./build:/tendermint:Z
    command: node --proxy_app=tcp://abci0:26658
    networks:
      localnet:
        ipv4_address: 192.167.10.2
  abci0:
    container_name: abci0
    image: "abci-image"
    volumes:
      - $GOPATH/src/samplePOC:/go/src/samplePOC
    ports:
      - "26658:26658"
    build:
      context: .
      dockerfile: $GOPATH/src/samplePOC/Dockerfile
    command: /go/src/samplePOC/samplePOC
    networks:
      localnet:
        ipv4_address: 192.167.10.6
Both the node and the ABCI containers are built successfully. The ABCI server starts successfully and the node tries to make connections. However, the main problem is that the two are not able to communicate with each other.
I get the following error:
node0 |E[2019-10-29|15:14:28.525] abci.socketClient failed to connect
to tcp://abci0:26658. Retrying... module=abci-client connection=query
err="dial tcp 192.167.10.6:26658: connect: connection refused"
Can someone please help me here?
My first thought is that you may need to add a depends_on: ["abci0"] to node0, as the ABCI application must be listening before Tendermint will try to connect.
Of course, TM should continue to retry so this may not be the issue.
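A minimal sketch of that change against the compose file above (only the relevant lines of node0 shown) would be:
node0:
  depends_on:
    - abci0   # start the ABCI container before the Tendermint node
  command: node --proxy_app=tcp://abci0:26658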
Another thing you can try is to run Tendermint on your host machine and attempt to connect to the exposed ABCI port on abci0 (26658), to isolate the problem to the Docker configuration.
If you're not able to run tendermint node --proxy_app=tcp://localhost:26658 the problem likely lies in your ABCI application.
I assume you've initialized a directory in the volume you mount into node0?
I got this working with the kvstore example from Tendermint.
version: "3.4"
services:
kvstore-app:
image: alpine
expose:
- "26658"
volumes:
- ./kvstore-example:/home/dev/kvstore-example
command: "/home/dev/kvstore-example --socket-addr tcp://kvstore-app:26658"
tendermint-node:
image: tendermint/tendermint
depends_on:
- kvstore-app
ports:
- "26657:26657"
environment:
- TMHOME=/tmp/tendermint
volumes:
- ./tmp/tendermint:/tmp/tendermint
command: node --proxy_app=tcp://kvstore-app:26658
I'm not exactly sure why your docker-compose.yml isn't working, but it's likely that you are not binding the socket of your ABCI application in a way that is accessible to the node. I'm explicitly telling the ABCI application to do so with the argument --socket-addr tcp://kvstore-app:26658. Additionally, I'm just exposing the port of the ABCI application on the Docker network, but I think mapping the port should do this implicitly.
Also I would get rid of all the network stuff. Personally, I use the network configuration only if I have some very specific network goals in mind.

Docker compose bitcoin service

I have a simple Python service that sends a single command to a running bitcoin server. When I run a local bitcoin daemon, everything works fine. However, when I try to run this using Docker, I cannot connect this service to a bitcoin server in another Docker container, as in this docker-compose file:
version: '3'
services:
  my_service:
    build: .
    volumes:
      - .:/app
    depends_on:
      - bitcoind
    links:
      - bitcoind
    working_dir: /app
  bitcoind:
    image: ruimarinho/bitcoin-core:0.15.0.1-alpine
    command:
      -printtoconsole
      -regtest=1
      -rest
      -rpcallowip=10.211.0.0/16
      -rpcallowip=172.17.0.0/16
      -rpcallowip=192.168.0.0/16
      -rpcpassword=bar
      -rpcport=18333
      -rpcuser=foo
      -server
    ports:
      - 18333:18333
volumes:
  bitcoin_data:
I keep getting the following error:
ConnectionError: HTTPConnectionPool(host='bitcoind', port=18333): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7faded979310>: Failed to establish a new connection: [Errno -2] Name or service not known',))
Any ideas?
You must open container port 18333. With Docker Compose, you can use the 'expose' directive to do it.
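A minimal sketch of that change against the compose file above (only the relevant lines of the bitcoind service shown) would be:
bitcoind:
  image: ruimarinho/bitcoin-core:0.15.0.1-alpine
  expose:
    - "18333"   # expose the RPC port to other containers, as suggested above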
