no data exchanged between IBpy2 and IBGateway - docker

I am using backtrader as a client, via IBpy2, to access my IBC-controlled IBGateway running in Docker.
The issue I'm facing is that my system starts and then just hangs, with no errors or debug output printed.
I debugged my way down to this line:
self.m_serverVersion = self.m_reader.readInt()
which waits to receive the server version over the connection; that message never arrives.
I only get this when IBGateway runs in Docker, and I don't understand how it's possible that IBpy can establish a connection but then exchange no data.
I could not pinpoint where the problem might be, but the fact that it only happens when IBC runs under Docker Compose suggests that Compose is involved. Here is my docker-compose.yml file:
--- updated: ---
version: '3.7'
services:
  trader:
    build: ./
    image: mytrader
    container_name: mytrader
    networks:
      - trading
    depends_on:
      - tws
  tws:
    build: ./ib-docker
    image: ibconnect
    container_name: ibconnect
    ports:
      # - "4001:4001"
      - "4003:4003"
      - "5901:5901"
    volumes:
      - ./ib-docker/config.ini:/root/ibc/config.ini
      # - ./ib-docker/twsstart.sh:/opt/ibc/twsstart.sh
      - ./ib-docker/gatewaystart.sh:/opt/ibc/gatewaystart.sh
    environment:
      - TZ=UTC
      # Variables pulled from /root/IBController/IBControllerGatewayStart.sh
      - VNC_PASSWORD=password
      - IBC_PATH=/opt/ibc
      - LOG_PATH=/root/ibc/logs
    env_file:
      - tws_credentials.env
    networks:
      - trading
networks:
  trading:
    driver: bridge
and here is the list of networks:
% docker network ls
NETWORK ID     NAME                   DRIVER    SCOPE
4ad25f1cf0f4   bridge                 bridge    local
9ca6f0e3f509   giuliotrader_default   bridge    local
3afbca83e020   giuliotrader_trading   bridge    local
73c2590a3a11   host                   host      local
34e58c19f5e3   none                   null      local
Happy to post any additional files or info as needed.
Thanks,

Good afternoon. Maybe you should use links: from trader to tws:
services:
  trader:
    links:
      - tws
    build: ./
    image: mytrader
    container_name: mytrader
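Whichever way the two containers end up on a common network (links: or the shared trading network), the client inside the trader container has to address the gateway by the tws service name and the published API port (4003 in the compose file above) rather than 127.0.0.1. A minimal sketch of wiring that through the environment; the IB_HOST/IB_PORT variable names are purely illustrative, nothing reads them automatically:
services:
  trader:
    build: ./
    image: mytrader
    networks:
      - trading
    depends_on:
      - tws
    environment:
      # hypothetical variables the trading script could read at startup
      # instead of hard-coding localhost; "tws" resolves to the gateway
      # container on the trading network
      - IB_HOST=tws
      - IB_PORT=4003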

Related

How to get redis address from docker compose?

I'm trying to pass the Redis URL to a Docker container, but so far I couldn't get it to work. I did a little research and none of the answers worked for me.
version: '3.2'
services:
  redis:
    image: 'bitnami/redis:latest'
    container_name: redis
    hostname: redis
    expose:
      - 6379
    links:
      - api
  api:
    image: tufanmeric/api:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - proxy
    environment:
      - REDIS_URL=redis
    depends_on:
      - redis
    deploy:
      mode: global
      labels:
        - 'traefik.port=3002'
        - 'traefik.frontend.rule=PathPrefix:/'
        - 'traefik.frontend.rule=Host:api.example.com'
        - 'traefik.docker.network=proxy'
networks:
  proxy:
Error: Redis connection to redis failed - connect ENOENT redis
You can only communicate between containers on the same Docker network. Docker Compose creates a default network for you, and absent any specific declaration your redis container is on that network. But you also declare a separate proxy network, and only attach the api container to that other network.
The single simplest solution to this is to delete all of the networks: blocks everywhere and just use the default network Docker Compose creates for you. You may need to format the REDIS_URL variable as an actual URL, maybe like redis://redis:6379.
If you have a non-technical requirement to have separate networks, add - default to the networks listing for the api container.
You have a number of other settings in your docker-compose.yml that aren't especially useful. expose: does almost nothing at all, and is usually also provided in a Dockerfile. links: is an outdated way to make cross-container calls, and as you've declared it, it would allow calls from Redis to your API server rather than the other way around. hostname: has no effect outside the container itself and is usually totally unnecessary. container_name: does have some visible effects, but the container name Docker Compose picks is usually just fine.
This would leave you with:
version: '3.2'
services:
  redis:
    image: 'bitnami/redis:latest'
  api:
    image: tufanmeric/api:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - REDIS_URL=redis://redis:6379
    depends_on:
      - redis
    deploy:
      mode: global
      labels:
        - 'traefik.port=3002'
        - 'traefik.frontend.rule=PathPrefix:/'
        - 'traefik.frontend.rule=Host:api.example.com'
        - 'traefik.docker.network=default'
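If you do have to keep the separate proxy network for Traefik, the variant described above (adding - default to the api service's networks) would look roughly like this; redis stays on the default network only:
version: '3.2'
services:
  redis:
    image: 'bitnami/redis:latest'
  api:
    image: tufanmeric/api:latest
    environment:
      - REDIS_URL=redis://redis:6379
    networks:
      - default   # shared with redis, so the hostname "redis" still resolves
      - proxy     # attached so Traefik can reach the api container
networks:
  proxy: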

Setting up IPFS Cluster on docker environment

I am trying to set up a 2-node private IPFS cluster using Docker. For that purpose I am using the ipfs/ipfs-cluster:latest image.
My docker-compose file looks like this:
version: '3'
services:
  peer-1:
    image: ipfs/ipfs-cluster:latest
    ports:
      - 8080:8080
      - 4001:4001
      - 5001:5001
    volumes:
      - ./cluster/peer1/config:/data/ipfs-cluster
  peer-2:
    image: ipfs/ipfs-cluster:latest
    ports:
      - 8081:8080
      - 4002:4001
      - 5002:5001
    volumes:
      - ./cluster/peer2/config:/data/ipfs-cluster
While starting the containers I get the following error:
ERROR ipfshttp: error posting to IPFS: Post http://127.0.0.1:5001/api/v0/repo/stat?size-only=true: dial tcp 127.0.0.1:5001: connect: connection refused ipfshttp.go:745
Please help with the problem.
Is there any proper documentation about how to set up an IPFS cluster on Docker? This document misses a lot of details.
Thank you.
I figured out how to run a multi-node IPFS cluster in a Docker environment.
The current ipfs/ipfs-cluster image, which is version 0.4.17, doesn't run an IPFS peer (i.e. ipfs/go-ipfs) inside it. We need to run that separately.
So, in order to run a multi-node (2-node in this case) IPFS cluster in a Docker environment, we need to run 2 IPFS peer containers and 2 IPFS cluster containers, one corresponding to each peer.
Your docker-compose file will then look as follows:
version: '3'
networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
services:
  ipfs0:
    container_name: ipfs0
    image: ipfs/go-ipfs
    ports:
      - "4001:4001"
      - "5001:5001"
      - "8081:8080"
    volumes:
      - ./var/ipfs0-docker-data:/data/ipfs/
      - ./var/ipfs0-docker-staging:/export
    networks:
      vpcbr:
        ipv4_address: 10.5.0.5
  ipfs1:
    container_name: ipfs1
    image: ipfs/go-ipfs
    ports:
      - "4101:4001"
      - "5101:5001"
      - "8181:8080"
    volumes:
      - ./var/ipfs1-docker-data:/data/ipfs/
      - ./var/ipfs1-docker-staging:/export
    networks:
      vpcbr:
        ipv4_address: 10.5.0.7
  ipfs-cluster0:
    container_name: ipfs-cluster0
    image: ipfs/ipfs-cluster
    depends_on:
      - ipfs0
    environment:
      CLUSTER_SECRET: 1aebe6d1ff52d96241e00d1abbd1be0743e3ccd0e3f8a05e3c8dd2bbbddb7b93
      IPFS_API: /ip4/10.5.0.5/tcp/5001
    ports:
      - "9094:9094"
      - "9095:9095"
      - "9096:9096"
    volumes:
      - ./var/ipfs-cluster0:/data/ipfs-cluster/
    networks:
      vpcbr:
        ipv4_address: 10.5.0.6
  ipfs-cluster1:
    container_name: ipfs-cluster1
    image: ipfs/ipfs-cluster
    depends_on:
      - ipfs1
      - ipfs-cluster0
    environment:
      CLUSTER_SECRET: 1aebe6d1ff52d96241e00d1abbd1be0743e3ccd0e3f8a05e3c8dd2bbbddb7b93
      IPFS_API: /ip4/10.5.0.7/tcp/5001
    ports:
      - "9194:9094"
      - "9195:9095"
      - "9196:9096"
    volumes:
      - ./var/ipfs-cluster1:/data/ipfs-cluster/
    networks:
      vpcbr:
        ipv4_address: 10.5.0.8
This will spin up a 2-peer IPFS cluster, and we can store and retrieve files using either peer.
The catch here is that we need to provide IPFS_API to ipfs-cluster as an environment variable so that ipfs-cluster knows its corresponding peer, and both ipfs-cluster containers need to have the same CLUSTER_SECRET.
According to the article you posted:
The container does not run go-ipfs. You should run the IPFS daemon separately, for example, using the ipfs/go-ipfs Docker container. We recommend mounting the /data/ipfs-cluster folder to provide a custom, working configuration, as well as persistency for the cluster data. This is usually achieved by passing -v :/data/ipfs-cluster to docker run.
If you in fact need to connect to another service within the same docker-compose file, you can simply refer to it by its service name: Compose creates hostname entries in all of the containers, so services can talk to each other by name instead of by IP.
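A sketch of what that could look like for the compose file above, assuming ipfs-cluster accepts a DNS-based multiaddr for IPFS_API (which would let you drop the fixed ipv4_address assignments):
version: '3'
services:
  ipfs0:
    image: ipfs/go-ipfs
  ipfs-cluster0:
    image: ipfs/ipfs-cluster
    depends_on:
      - ipfs0
    environment:
      CLUSTER_SECRET: 1aebe6d1ff52d96241e00d1abbd1be0743e3ccd0e3f8a05e3c8dd2bbbddb7b93
      # "ipfs0" resolves to the ipfs0 container on the default Compose network
      IPFS_API: /dns4/ipfs0/tcp/5001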
Additionally:
Unless you run docker with --net=host, you will need to set $IPFS_API or make sure the configuration has the correct node_multiaddress.
The equivalent of --net=host in docker-compose is network_mode: "host" (incompatible with port mapping): https://docs.docker.com/compose/compose-file/#network_mode
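For illustration, a minimal service definition using host networking; note there is no ports: section, since port mapping is not allowed in this mode:
version: '3'
services:
  ipfs0:
    image: ipfs/go-ipfs
    # shares the host's network stack; 4001/5001/8080 are reachable directly
    # on the host, so no port mapping is possible or needed
    network_mode: "host"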

Docker hostnames are not resolved in a custom network

I have the following configuration in my docker-compose.yml file.
version: '3.3'
services:
  service-1:
    container_name: 'service-1'
    build: './service-1'
    depends_on:
      - 'mongo'
      - 'consul'
    networks:
      backend:
        aliases:
          - service-1
  service-2:
    build: './service-2'
    ports:
      - '8825:8825'
      - '8835:8835'
    networks:
      frontend:
      backend:
        aliases:
          - service-2
    depends_on:
      - 'mongo'
      - 'consul'
  consul:
    image: 'consul:latest'
    networks:
      backend:
        aliases:
          - consul
  mongo:
    image: 'mongo:latest'
    networks:
      backend:
        aliases:
          - mongo
networks:
  frontend:
  backend:
    internal: true
When my containers start they are not able to communicate with each other using hostnames.
Most of the containers use the mongo db container, but they are not even able to reach it, and I am getting the following error:
Error connecting to mongo : no reachable servers
Please help me to solve the problem, I'm stuck.
Thanks.
You've got a lot of unneeded settings in the compose file, here's a stripped down version that would work just as well:
version: '3.3'
services:
  service-1:
    build: './service-1'
    networks:
      - backend
  service-2:
    build: './service-2'
    ports:
      - '8825:8825'
      - '8835:8835'
    networks:
      - frontend
      - backend
  consul:
    image: 'consul:latest'
    networks:
      - backend
  mongo:
    image: 'mongo:latest'
    networks:
      - backend
networks:
  frontend:
  backend:
    internal: true
You automatically get an alias of the service name for each container, no need to duplicate that. You also lose the ability to scale a service if you give it a container name. I'd also recommend moving the build step out of the compose file and using an image name for the apps you're building locally.
Now for the likely issue: you have a depends_on in your compose file. At best, this will not do what you're looking for. All it checks is that the other container has been created and started, not that the application inside is ready to serve traffic, and a DB may take time to become available. At worst, you'll get an error that it's unsupported if you try to move this into swarm mode.
Instead of depending on docker for this, update your application entrypoint to check for the external dependencies and wait a minute or two for them to become available before failing. A very simple example tool for this is wait-for-it, which is written as a bash shell script.
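A rough sketch of how that could be wired into the stripped-down compose file above, assuming wait-for-it.sh has been copied into the service-1 image and that npm start is its normal start command (both are assumptions for illustration):
version: '3.3'
services:
  service-1:
    build: './service-1'
    networks:
      - backend
    # block until mongo accepts connections on 27017 (up to 60s), then start the app
    command: ["./wait-for-it.sh", "mongo:27017", "--timeout=60", "--", "npm", "start"]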

docker-compose: difference between networks and links

I'm learning Docker and these two terms confuse me. For example, here is a docker-compose file that defines two services, redis and web-app.
services:
  redis:
    container_name: redis
    image: redis:latest
    ports:
      - "6379:6379"
    networks:
      - lognet
  app:
    container_name: web-app
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    volumes:
      - ".:/webapp"
    links:
      - redis
    networks:
      - lognet
networks:
  lognet:
    driver: bridge
This docker-compose file defines a bridge network named lognet, and all services connect to that network. As I understand it, this already makes the services visible to each other. So why does the app service still need to link to the redis service in the above case?
Thanks
Links have been replaced by networks. Docker describes them as a legacy feature that you should avoid using. You can safely remove the link and the two containers will be able to refer to each other by their service name (or container_name).
With compose, links do have a side effect of creating an implied dependency. You should replace this with a more explicit depends_on section so that the app doesn't attempt to run without redis or before redis starts.
As an aside, I'm not a fan of hard coding container_name unless you are certain that this is the only container that will exist with that name on the host and you need to refer to it from the docker cli by name. Without the container name, docker-compose will give it a less intuitive name, but it will also give it an alias of redis on the network, which is exactly what you need for container to container networking. So the end result with these suggestions is:
version: '2'
# do not forget the version line, this file syntax is invalid without it
services:
  redis:
    image: redis:latest
    ports:
      - "6379:6379"
    networks:
      - lognet
  app:
    container_name: web-app
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    volumes:
      - ".:/webapp"
    depends_on:
      - redis
    networks:
      - lognet
networks:
  lognet:
    driver: bridge

Docker-compose external_links not able to connect

I have a couple of app containers that I want to connect to the mongodb container. I tried with external_links but I cannot connect to mongodb.
I get
MongoError: failed to connect to server [mongodb:27017] on first connect
Do I have to add the containers into the same network to get external_links working?
MongoDB:
version: '2'
services:
  mongodb:
    image: mongo:3.4
    restart: always
    ports:
      - "27017:27017"
    volumes:
      - data:/data/db
volumes:
  data:
App:
version: '2'
services:
  app-dev:
    restart: always
    build: repository/
    ports:
      - "3000:80"
    env_file:
      - ./environment.env
    external_links:
      - mongodb_mongodb_1:mongodb
Networks:
# sudo docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
29f8bae3e136   bridge            bridge    local
67d5519cb2e6   dev_default       bridge    local
9e7097c844cf   host              host      local
481ee4301f7c   mongodb_default   bridge    local
4275508449f6   none              null      local
873a46298cd9   prod_default      bridge    local
Documentation at https://docs.docker.com/compose/compose-file/#/externallinks says
If you’re using the version 2 file format, the externally-created containers must be connected to at least one of the same networks as the service which is linking to them.
Ex:
Create a new docker network
docker network create -d bridge custom
docker-compose-1.yml
version: '2'
services:
  postgres:
    image: postgres:latest
    ports:
      - 5432:5432
    networks:
      - custom
networks:
  custom:
    external: true
docker-compose-2.yml
version: '2'
services:
  app:
    image: training/webapp
    networks:
      - custom
    external_links:
      - postgres:postgres
networks:
  custom:
    external: true
Yuva's answer above for the version 2 format holds good for version 3 as well.
The documentation for external_links isn't clear enough.
For more clarity, I pasted the version 3 variation with annotations:
version: '3'
services:
  app:
    image: training/webapp
    networks:
      - <<network created by other compose file>>
    external_links:
      - postgres:postgres
networks:
  <<network created by other compose file>>:
    external: true
Recently I faced a name resolution failure while trying to link 2 containers handled by docker-compose v3 (a gRPC server and client, in my case), even with external_links.
I'll probably duplicate some of the info posted here, but I'll try to summarize, as all of it helped me solve the issue.
From the external_links docs (as mentioned in an earlier answer):
If you’re using the version 2 or above file format, the externally-created containers must be connected to at least one of the same networks as the service that is linking to them.
The following configuration solved the issue.
project-grpc-server/docker-compose.yml
version: '3'
services:
  app:
    networks:
      - some-network
networks:
  some-network:
The server container is configured as expected.
project-grpc-client/docker-compose.yml
services:
  app:
    external_links:
      # Assigning an easy alias to the target container
      - project-grpc-server_app_1:server
    networks:
      # Joining the target network (where the server resides)
      - project-grpc-server_some-network
networks:
  # Announcing the target network created by the other compose file
  project-grpc-server_some-network:
    # Telling Compose the announced network already exists (it shouldn't be created here, just used)
    external: true
When using defaults (no container_name configured), the trick to configuring the client container is in the prefixes. In my case the network name gets the prefix project-grpc-server_ from docker-compose, followed by the name itself, some-network (giving project-grpc-server_some-network). So fully qualified network names should be used when dealing with separate builds.
While the container name is fairly obvious, since it appears on screen from time to time, the full network name is not an easy-to-guess candidate when first facing this part of Docker, unless you run docker network ls.
I'm not a Docker expert, so please don't judge too strictly if all this is obvious and essential in the Docker world.
