I have a simple Python service that sends a single command to a running bitcoin server. When I run a local bitcoin daemon, everything works fine. However, when I try to run this with Docker, I cannot connect the service to a bitcoin server running in another container, as in this docker-compose file:
version: '3'
services:
  my_service:
    build: .
    volumes:
      - .:/app
    depends_on:
      - bitcoind
    links:
      - bitcoind
    working_dir: /app
  bitcoind:
    image: ruimarinho/bitcoin-core:0.15.0.1-alpine
    command:
      -printtoconsole
      -regtest=1
      -rest
      -rpcallowip=10.211.0.0/16
      -rpcallowip=172.17.0.0/16
      -rpcallowip=192.168.0.0/16
      -rpcpassword=bar
      -rpcport=18333
      -rpcuser=foo
      -server
    ports:
      - 18333:18333
volumes:
  bitcoin_data:
I keep getting the following error:
ConnectionError: HTTPConnectionPool(host='bitcoind', port=18333): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7faded979310>: Failed to establish a new connection: [Errno -2] Name or service not known',))
Any ideas?
You must open container port 18333. With Docker Compose, you can use the 'expose' key to do it.
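A minimal sketch of what that could look like on the bitcoind service (the rest of the service definition stays as in the question; note that ports already publishes 18333 to the host, while expose declares the port for other containers on the compose network):
  bitcoind:
    image: ruimarinho/bitcoin-core:0.15.0.1-alpine
    expose:
      - "18333"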
I created an API using Node.js.
I converted it to an image by running the command docker build -t droneapi:1.0 using this Dockerfile:
FROM node:19-alpine
ENV MONGO_DB_USERNAME = admin \
MONGO_DB_PWD=password
RUN mkdir -p /home/droneAPI
COPY . /Users/styles/Programming/DroneAPI2
CMD ["node", "/Users/styles/Programming/DroneAPI2/Drones/server.js"]
I ran docker run droneapi:1.0 to create a container to talk to my mongodb container but I received the error: getaddrinfo ENOTFOUND mongodb
I’m using mongoose to try and communicate with the db
const connectDB = async () => {
    try {
        const conn = await mongoose.connect("mongodb://admin:password@mongodb:27017", {dbName: 'drobedb'})
        console.log(`MongoDB Connected: ${conn.connection.host}`.cyan.underline)
    } catch (error) {
        console.log(`Error: ${error.message}`.red.underline.bold)
        process.exit(1)
    }
}
I have tried to replace the 'mongodb' in the connection string with localhost and I receive Error: connect ECONNREFUSED 127.0.0.1:27017
Here is my mongo.yaml file
version: '3'
services:
  mongodb:
    image: mongo
    ports:
      - 27017:27017
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
  mongo-express:
    image: mongo-express
    ports:
      - 8081:8081
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME=admin
      - ME_CONFIG_MONGODB_ADMINPASSWORD=password
      - ME_CONFIG_MONGODB_SERVER=mongodb
I'm new to Docker, so any assistance will be appreciated.
When we start a container stack through a compose file, a network for this compose stack is created for us, and all containers are attached to this network. When we start a container through docker run ..., it is attached to the default network. Containers in different networks cannot communicate with each other. My recommendation would be to add the droneapi container to the compose file:
version: '3'
services:
  ...
  drone-api:
    build:
      context: .
      dockerfile: path/to/Dockerfile
    ...
    depends_on:
      - mongodb # to start this container after mongodb has been started
If we want to start the stack, we can run docker compose up -d. Notice that if the image was built before, it will not be automatically rebuilt. To rebuild the image, we can run docker compose up --build -d.
As an aside: I would recommend following the twelve factors for cloud-native applications (12factors.net). In particular, I would recommend externalizing the database configuration.
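A minimal sketch of what externalized configuration could look like here (MONGO_URL is a placeholder variable name, not something from the original code; the application would then read it, e.g. from process.env, instead of hard-coding the connection string):
  drone-api:
    build:
      context: .
      dockerfile: path/to/Dockerfile
    environment:
      - MONGO_URL=mongodb://admin:password@mongodb:27017
    depends_on:
      - mongodb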
I am trying to connect to a geth node in a Docker container via RPC over HTTP. It works fine when I connect from the host and use the URL http://localhost:8545 with the Web3.HTTPProvider instance in a Python web application; however, when I try to connect from inside another container using the docker-compose name of the service for the server (http://ethereum:8545), I get the following error message:
docker-compose-web-1 | requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: http://ethereum:8545
My docker-compose.yml is the following:
services:
  web:
    image: cramananjaona/minimalapp:0.0.1
    ports:
      - "8088:8088"
    links:
      - ethereum
  ethereum:
    image: cramananjaona/tabouret:0.0.1
    ports:
      - 8545:8545
      - 30303:30303
The Dockerfile for ethereum is
FROM alpine:latest
RUN apk update
RUN apk add geth
COPY node02 node02
CMD ["geth", "--datadir", "node02", "--port", "30303", "--nodiscover", "--http", "--http.api", "eth,net,web3,admin", "--http.addr", "0.0.0.0", "--networkid", "1900"]
Is there a particular option which needs to be passed to Docker or geth to make this work?
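One thing that might be worth checking (an assumption based on geth's documented HTTP options, not a confirmed diagnosis of this setup): geth only accepts HTTP requests whose Host header is in its virtual-hostnames whitelist, which defaults to localhost, so requests sent to http://ethereum:8545 can come back as 403 Forbidden. A sketch of the CMD with the --http.vhosts flag added:
CMD ["geth", "--datadir", "node02", "--port", "30303", "--nodiscover", "--http", "--http.api", "eth,net,web3,admin", "--http.addr", "0.0.0.0", "--http.vhosts", "ethereum,localhost", "--networkid", "1900"]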
I am setting up local Azure Blob Storage using a Docker container and docker-compose.
However, when I start creating blob containers and uploading files, it throws the error below.
azure.common.AzureException: HTTPConnectionPool(host='127.0.0.1', port=10000): Max retries exceeded with url: /devstoreaccount1/quickstartblobs?restype=container (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f1068d0f748>: Failed to establish a new connection: [Errno 111] Connection refused',))
Here is my docker-compose:
version: "3.9"
services:
db:
image: postgres
volumes:
- ./data/db:/var/lib/postgresql/data
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- DEBUG=FALSE
- AZURE_STORAGE_CONNECTION_STRING=DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=https://127.0.0.1:10000/devstoreaccount1;
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- 8000:8000
- 5678:5678
depends_on:
- db
azurite:
image: mcr.microsoft.com/azure-storage/azurite
ports:
- "127.0.0.1:10000:10000"
Requirements.txt
djangorestframework==3.11.2
Django==3.1.8
Pygments==2.7.4
Markdown==3.2.1
coreapi==2.3.3
psycopg2-binary==2.8.4
dj-database-url==0.5.0
gunicorn==20.0.4
whitenoise==5.0.1
PyYAML==5.4
azure-storage-blob==2.1.0
ptvsd==4.3.2
azure-common==1.1.23
azure-storage-common==2.1.0
requests==2.25.1
six==1.11.0
urllib3==1.26.3
Code:
from azure.storage.blob import BlockBlobService

blob_service_client = BlockBlobService(
    account_name='devstoreaccount1',
    account_key='Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==',
    is_emulated=True)

# Create a container called 'quickstartblobs'.
container_name = 'quickstartblobs'
blob_service_client.create_container(container_name)
You can remove the ports section for the azurite service in your compose file and, in your application, provide a connection string that specifies the blob endpoint (as mentioned here: https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azurite#connection-strings) as BlobEndpoint=http://azurite:10000.
When you use the local Docker bridge network (created for services deployed with Compose), the container name, if provided explicitly, or otherwise the service name can be used to access the service.
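For example, the environment entry could look like the sketch below, shown on the web service where the application presumably runs (the switch to http and the azurite hostname are the relevant changes; the account name and key are the well-known Azurite development defaults already used in the question):
  web:
    environment:
      - AZURE_STORAGE_CONNECTION_STRING=DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite:10000/devstoreaccount1;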
I have created a docker-compose file with two services, Go and MySQL. It creates containers for Go and MySQL. Now I am running code which tries to connect to the MySQL database running as a Docker container, but I get an error.
docker-compose.yml
version: "2"
services:
app:
container_name: golang
restart: always
build: .
ports:
- "49160:8800"
links:
- "mysql"
depends_on:
- "mysql"
mysql:
image: mysql
container_name: mysql
volumes:
- dbdata:/var/lib/mysql
restart: always
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=testDB
- MYSQL_USER=root
- MYSQL_PASSWORD=root
ports:
- "3307:3306"
volumes:
dbdata:
Error while connecting to the MySQL database:
golang | 2019/02/28 11:33:05 dial tcp 127.0.0.1:3306: connect: connection refused
golang | 2019/02/28 11:33:05 http: panic serving 172.24.0.1:49066: dial tcp 127.0.0.1:3306: connect: connection refused
golang | goroutine 19 [running]:
Connection to the MySQL database:
func DB() *gorm.DB {
    db, err := gorm.Open("mysql", "root:root@tcp(mysql:3306)/testDB?charset=utf8&parseTime=True&loc=Local")
    if err != nil {
        log.Panic(err)
    }
    log.Println("Connection Established")
    return db
}
EDIT: Updated Dockerfile
FROM golang:latest
RUN go get -u github.com/gorilla/mux
RUN go get -u github.com/jinzhu/gorm
RUN go get -u github.com/go-sql-driver/mysql
COPY ./wait-for-it.sh .
RUN chmod +x /wait-for-it.sh
WORKDIR /go/src/app
ADD . src
EXPOSE 8800
CMD ["go", "run", "src/main.go"]
I am using the gorm package, which lets me connect to the database.
depends_on does not verify that MySQL is actually ready to receive connections. It starts the second container once the database container is running, regardless of whether it is ready for connections, which can lead to exactly this kind of issue: your application expects the database to be ready when it might not be.
Quoted from the documentation:
depends_on does not wait for db and redis to be “ready” before starting web - only until they have been started.
There are many tools/scripts that can be used to solve this issue, like wait-for, which is sh-compatible in case your image is based on Alpine, for example (you can use wait-for-it if you have bash in your image).
All you have to do is add the script to your image through the Dockerfile, then use a command like the one below in docker-compose.yml for the service that should wait for the database.
What comes after -- is the command that you would normally use to start your application.
version: "2"
services:
app:
container_name: golang
...
command: ["./wait-for", "mysql:3306", "--", "go", "run", "myapplication"]
links:
- "mysql"
depends_on:
- "mysql"
mysql:
image: mysql
...
I have removed some parts from the docker-compose for easier readability.
Replace the go run myapplication part with the CMD of your golang image.
See Controlling startup order for more on this problem and strategies for solving it.
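One such strategy is a healthcheck on the mysql service combined with a depends_on condition. This is only a sketch and assumes a Compose file format that supports conditions (e.g. 2.1 or the newer Compose specification), which version "2" does not:
services:
  app:
    ...
    depends_on:
      mysql:
        condition: service_healthy
  mysql:
    image: mysql
    ...
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 5s
      retries: 10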
Another issue will arise after you solve the connection issue:
Setting MYSQL_USER to root will cause MySQL to fail with this error message:
ERROR 1396 (HY000) at line 1: Operation CREATE USER failed for 'root'@'%'
This is because this user already exists in the database, and it tries to create another one. If you need to use the root user itself, use only the MYSQL_ROOT_PASSWORD variable, or change the value of MYSQL_USER so you can use a non-root user in your application instead of root.
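For example (appuser is just a placeholder name, not something from the original file):
  mysql:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=testDB
      - MYSQL_USER=appuser
      - MYSQL_PASSWORD=root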
Update: In case you are getting a "not found" error and the path is correct, you might need to write the command as below:
command: sh -c "./wait-for mysql:3306 -- go run myapplication"
First, if you are using a recent version of Docker Compose, you don't need the links argument in your app service. Quoting the Docker Compose documentation: "Warning: The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it..." (https://docs.docker.com/compose/compose-file/#links).
I think the solution is to use the networks argument. This creates a Docker network and adds each service to it.
Try this:
version: "2"
services:
app:
container_name: golang
restart: always
build: .
ports:
- "49160:8800"
networks:
- my_network
depends_on:
- "mysql"
mysql:
image: mysql
container_name: mysql
volumes:
- dbdata:/var/lib/mysql
restart: always
networks:
- my_network
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=testDB
- MYSQL_USER=root
- MYSQL_PASSWORD=root
ports:
- "3307:3306"
volumes:
dbdata:
networks:
my_network:
driver: bridge
By the way, if you only connect to MySQL from your app service, you don't need to publish the MySQL port. If the containers run in the same network, they can reach all ports inside that network.
If my example doesn't work, try this:
Run the docker compose stack and then go into the app container using
docker container exec -it CONTAINER_NAME bash
Install ping in order to test the connection and then run ping mysql.
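A sketch of those steps, assuming the app image is Debian-based like golang:latest (on an Alpine-based image, use apk add iputils and sh instead):
docker container exec -it golang bash
apt-get update && apt-get install -y iputils-ping
ping mysql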
I am trying to connect to a cassandra container from a separate container (named main).
This is my docker-compose.yml
version: '3.2'
services:
  main:
    build:
      context: .
    image: main-container:latest
    depends_on:
      - cassandra
    links:
      - cassandra
    stdin_open: true
    tty: true
  cassandra:
    build:
      context: .
      dockerfile: Dockerfile-cassandra
    ports:
      - "9042:9042"
      - "9160:9160"
    image: "customer-core-cassandra:latest"
Once I run this using docker-compose up, I run this command:
docker-compose exec main cqlsh cassandra 9042
but I get this error:
Connection error: ('Unable to connect to any servers', {'172.18.0.2': error(111, "Tried connecting to [('172.18.0.2', 9042)]. Last error: Connection refused")})
I figured out the answer. Basically, the cassandra.yaml file sets the default rpc_address to localhost. In that case, Cassandra will only listen for requests on localhost and will not allow connections from anywhere else. To change this, I had to set rpc_address to my "cassandra" container so that my main container (and any other containers) could access Cassandra using the cassandra container's IP address.
rpc_address: cassandra
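If Dockerfile-cassandra is based on the official cassandra image (an assumption, since the question does not show its contents), the same change can likely be made from docker-compose through the image's entrypoint environment variables instead of editing cassandra.yaml by hand:
  cassandra:
    build:
      context: .
      dockerfile: Dockerfile-cassandra
    environment:
      - CASSANDRA_RPC_ADDRESS=cassandra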