I am running a Neo4j graph database inside a Docker container. I've written another service in Go that should be able to execute queries from its respective container. However, I cannot get the connection between those two containers established.
The docker-compose file of my database:
version: "3"
services:
neo4j-db:
image: neo4j:latest
ports:
- "7474:7474"
- "7473:7473"
- "7687:7687"
expose:
- 7474
networks:
app_net:
ipv4_address: 172.18.18.10
volumes:
- //C/Users/<user>/Desktop/neoj4/4.0/config:/conf
networks:
app_net:
driver: bridge
driver_opts:
com.docker.network.enable_ipv6: "false"
ipam:
driver: default
config:
- subnet: 172.18.18.0/24
My neo4j.conf:
dbms.connector.https.advertised_address=localhost:7473
dbms.default_listen_address=0.0.0.0
dbms.connector.http.advertised_address=localhost:7474
dbms.memory.pagecache.size=512M
dbms.connector.bolt.advertised_address=127.18.18.10:7687
dbms.tx_log.rotation.retention_policy=100M size
dbms.directories.logs=/logs
And finally inside my Go container:
uri := "bolt://127.18.18.10:7687"
username := "neo4j"
password := "test"
var (
err error
driver neo4j.Driver
session neo4j.Session
result neo4j.Result
greeting interface{}
)
fmt.Println("Connecting to Neo4j")
driver, err = neo4j.NewDriver(uri, neo4j.BasicAuth(username, password, ""), useConsoleLogger(neo4j.ERROR))
if err != nil {
fmt.Println("ERROR:" , err)
}
defer driver.Close()
fmt.Println("Getting Session")
session, err = driver.Session(neo4j.AccessModeWrite)
if err != nil {
fmt.Println("ERROR:" , err)
}
defer session.Close()
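For reference, useConsoleLogger is not part of the driver's API; it is a small helper, roughly along these lines (a sketch, assuming it matches the configurer pattern used in the driver's examples):

func useConsoleLogger(level neo4j.LogLevel) func(*neo4j.Config) {
    return func(config *neo4j.Config) {
        // Enable console logging at the given level.
        config.Log = neo4j.ConsoleLogger(level)
    }
}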
When calling the function, execution gets stuck after fmt.Println("Getting Session") without throwing any errors, and after about 30 seconds it simply terminates.
I also made sure both containers are on the same network (app_net). I can ping between the containers without issue. However, trying telnet from the Go container to Neo4j yields "Unable to connect to remote host: Connection refused".
I'm not sure what I'm doing wrong. Browser access to Neo4j works, and from what I can see the containers are on the same network.
Any advice or ideas are greatly appreciated.
After spending some additional time, I've managed to get it working. I took the following steps:
Use the container's hostname as the URI (i.e. "bolt://container_name").
Remove encryption to prevent a TLS error:
if driver, err = neo4j.NewDriver(uri, neo4j.BasicAuth(username, password, ""), func(config *neo4j.Config) {
    config.Log = neo4j.ConsoleLogger(neo4j.ERROR)
    config.Encrypted = false
}); err != nil {
    return err
}
defer driver.Close()
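With the driver created this way, a quick round-trip query confirms that Bolt connectivity actually works end to end. A minimal sketch using the same 1.x driver API, reusing the session, result and greeting variables declared earlier (the Cypher and message text are just placeholders):

fmt.Println("Running test query")
result, err = session.Run(
    "CREATE (a:Greeting) SET a.message = $message RETURN a.message",
    map[string]interface{}{"message": "hello from the Go container"},
)
if err != nil {
    fmt.Println("ERROR:", err)
}
if result.Next() {
    // Print the value returned by the query.
    greeting = result.Record().GetByIndex(0)
    fmt.Println(greeting)
}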
Related
I use OpenTelemetry for tracing and metrics. I have a pretty standard setup: there is a service that produces metrics/traces and an OpenTelemetry sidecar that collects these metrics and pushes them to AWS:
services:
  service:
    build:
      context: .
    image: service
    container_name: service
    ports:
      - "3000:3000"
    depends_on:
      - aws-otel-collector
  aws-otel-collector:
    image: public.ecr.aws/aws-observability/aws-otel-collector:latest
    container_name: aws-otel-collector
    ports:
      - "4317:4317"
The service flushes metrics and shuts down the exporter on shutdown:
shutdown, err := initMetricProvider(ctx)
if err != nil {
    log.Fatal(err)
}
defer func() {
    log.Printf("Shutting down metric provider")
    if err := shutdown(ctx); err != nil {
        log.Fatal(fmt.Errorf("failed to shutdown metric provider: %w", err))
    }
}()

meter := global.MeterProvider().Meter("service")
counter, err := meter.SyncInt64().Counter("test")
From time to time I get errors during restarts, caused by an inability to push metrics on shutdown, something like:
max retry time elapsed: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:4317: connect: connection refused"
This happens because the OTel collector sidecar is stopped before the service has flushed its metrics.
Question: how does one guarantee that the sidecar waits until metrics are flushed? Is there a way to delay the sidecar's shutdown? (I didn't manage to find this in the OTel documentation.)
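One thing worth double-checking in the snippet above, independent of the sidecar's shutdown order: if ctx is the same context that gets cancelled when the service receives SIGTERM, the deferred shutdown(ctx) can be aborted before the final export even starts. A minimal sketch of the deferred shutdown with its own bounded context (the 5-second budget is an assumption, not taken from the original setup):

defer func() {
    log.Printf("Shutting down metric provider")
    // Use a fresh context with a deadline for the final flush, so an
    // already-cancelled signal/request context doesn't abort the export.
    flushCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()
    if err := shutdown(flushCtx); err != nil {
        log.Printf("failed to shutdown metric provider: %v", err)
    }
}()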
I have two Docker containers running in the same network, and I want one of them to call the other via Spring WebClient.
I'm sure they are both in the same network; docker network inspect <network_ID> proves this.
AFAIK I can ping one container from another to check whether they can talk to each other: docker exec -ti attachment-loader-prim ping attachment-loader-sec
If I run this, I see responses from attachment-loader-sec like 64 bytes from 172.21.0.5: seq=0 ttl=64 time=0.220 ms, which means they can communicate.
When I send a Postman request to attachment-loader-prim on its exposed port (localhost:8085), I expect that after some business logic it calls attachment-loader-sec via WebClient, but at that step I get a 500 error with the following message:
"finishConnect(..) failed: Connection refused:
attachment-loader-sec/172.21.0.5:80; nested exception is
io.netty.channel.AbstractChannel$AnnotatedConnectException:
finishConnect(..) failed: Connection refused:
attachment-loader-sec/172.21.0.5:80"
Both attachment-loader-prim and attachment-loader-sec can be accessed separately via Postman and both respond without problems.
This is my docker-compose:
version: '3'
services:
  attachment-loader-prim:
    container_name: attachment-loader-prim
    build:
      context: ""
    restart: always
    image: attachment-loader:latest
    environment:
      SERVER_PORT: 8085
    networks:
      - loader_network
    expose:
      - 8085
    ports:
      - 8005:8005
      - 8085:8085
  attachment-loader-sec:
    container_name: attachment-loader-sec
    build:
      context: ""
    restart: always
    image: attachment-loader:latest
    environment:
      SERVER_PORT: 8086
    networks:
      - loader_network
    expose:
      - 8086
    ports:
      - 8006:8005
      - 8086:8086
networks:
  loader_network:
    driver: bridge
And this is the WebClient which makes the call:
class RemoteServiceCaller(private val fetcherWebClientBuilder: WebClient.Builder) {

    suspend fun getAttachmentsFromRemote(id: String, params: List<Param>, username: String): Result? {
        val client = fetcherWebClientBuilder.build()
        val awaitExchange = client.post()
            .uri("/{id}/attachment", id)
            .contentType(MediaType.APPLICATION_JSON)
            .bodyValue(params)
            .header(usernameHeader, username)
            .accept(MediaType.APPLICATION_OCTET_STREAM)
            .awaitExchange {
                if (it.statusCode().is2xxSuccessful) {
                    handleSucessCode(it)
                } else it.createExceptionAndAwait().run {
                    LOG.error(this.responseBodyAsString, this)
                    throw ProcessingException(this)
                }
            }
        return awaitExchange
    }

    private suspend fun handleSucessCode(response: ClientResponse) {
        // some not important logic
    }
}
P.S. The base URI for the WebClient is defined as a config bean, like http://attachment-loader-sec/list
All my investigations pointed me to problems such as:
Calling the container using localhost instead of the container name
Containers not being in the same network
None of that seems relevant in my case.
Any ideas will be really appreciated.
The problem was calling the service without its port. The URL is now http://attachment-loader-sec:8086/list, which is correct. In my case I now get a 404, which means my URL path is not quite right, but that is outside the scope of the current question.
I am trying out Dapr for the first time, referring to the Dapr Go SDK at https://github.com/dapr/go-sdk.
I am trying to host a Dapr service using Golang with Docker Compose on my Windows 10 machine (using VS Code) and am running into an issue connecting to the service.
I have the Docker Compose file set up with a simple configuration, as follows, and I am trying to connect to the service via the Dapr API using curl:
golang service (taskapi service) => Dapr SideCar (taskapidapr)
I based it off of the example from https://github.com/dapr/go-sdk/blob/main/example/Makefile, but using Docker Compose.
When I try to connect to the service using
curl -d "ping" -H "Content-type: text/plain;charset=UTF-8" "http://localhost:8300/v1.0/invoke/taskapi/method/echo"
I am running into the following error.
{"errorCode":"ERR_DIRECT_INVOKE","message":"invoke API is not ready"}
And the Dapr logs in Docker show "no mDNS apps to refresh." I'm not sure if this is the cause of it, or how to handle it.
If anyone can point me to what I am missing, I would greatly appreciate it.
Thank you
Athadu
golang package
package main

import (
    "context"
    "errors"
    "fmt"
    "log"
    "net/http"

    "github.com/dapr/go-sdk/service/common"
    daprd "github.com/dapr/go-sdk/service/http"
)

func main() {
    port := "8085"
    address := fmt.Sprintf(":%s", port)
    log.Printf("Creating New service at %v port", address)
    log.Println()

    // create a Dapr service (e.g. ":8080", "0.0.0.0:8080", "10.1.1.1:8080")
    s := daprd.NewService(address)

    // add a service to service invocation handler
    if err := s.AddServiceInvocationHandler("/echo", echoHandler); err != nil {
        log.Fatalf("error adding invocation handler: %v", err)
    }

    if err := s.Start(); err != nil && err != http.ErrServerClosed {
        log.Fatalf("error listening: %v", err)
    }
}

func echoHandler(ctx context.Context, in *common.InvocationEvent) (out *common.Content, err error) {
    if in == nil {
        err = errors.New("invocation parameter required")
        return
    }
    log.Printf(
        "echo - ContentType:%s, Verb:%s, QueryString:%s, %s",
        in.ContentType, in.Verb, in.QueryString, in.Data,
    )
    out = &common.Content{
        Data:        in.Data,
        ContentType: in.ContentType,
        DataTypeURL: in.DataTypeURL,
    }
    return
}
docker-compose.yml
version: "3"
services:
taskapi:
image: golang:1.16
volumes:
- ..:/go/src/lekha
working_dir: /go/src/lekha/uploader
command: go run main.go
ports:
- "8085:8085"
environment:
aaa: 80
my: I am THE variable value
networks:
- lekha
taskapidapr:
image: "daprio/daprd:edge"
command: [
"./daprd",
"-app-id", "taskapi",
"-app-protocol", "http",
"-app-port", "8085",
"-dapr-http-port", "8300",
"-placement-host-address", "placement:50006",
"-log-level", "debug",
"-components-path", "/components"
]
volumes:
- "../dapr-components/:/components" # Mount our components folder for the dapr runtime to use
depends_on:
- taskapi
ports:
- "8300:8300"
networks:
- lekha
#network_mode: "service:taskapi" # Attach the task-api-dapr service to the task-api network namespace
############################
# Dapr placement service
############################
placement:
image: "daprio/dapr"
command: ["./placement", "-port", "50006"]
ports:
- "50006:50006"
networks:
- lekha
networks:
lekha:
Daprd shows these mDNS messages in the logs; I'm not sure if this is the cause:
time="2021-05-24T01:06:13.6629303Z" level=debug msg="Refreshing all
mDNS addresses." app_id=taskapi instance=442e04c9e8a6
scope=dapr.contrib type=log ver=edge
time="2021-05-24T01:06:13.6630421Z" level=debug msg="no mDNS apps to
refresh." app_id=taskapi instance=442e04c9e8a6 scope=dapr.contrib
Additionally, I see the containers running fine on the expected ports in Docker Desktop.
{
    "errorCode": "ERR_DIRECT_INVOKE",
    "message": "invoke API is not ready"
}
same as yours
The Go ES client (https://godoc.org/gopkg.in/olivere/elastic.v6) throws a "no active connection found: no Elasticsearch node available" error when attempting to connect from the OS X host to ES running in a Docker container.
There are many discussions on how to solve this for v5.*; however, I couldn't find anything for v6.4.
Docker-compose part:
elasticsearch:
  image: elasticsearch:6.4.2
  network_mode: "bridge"
  expose:
    - "9200"
    - "9300"
  volumes:
    - ./es-data:/usr/share/elasticsearch/data
  ports:
    - "9200:9200"
    - "9300:9300"
Go client call:
esClient, esClientErr := elastic.NewClient(elastic.SetURL("http://127.0.0.1:9200"))
if esClientErr != nil {
    return nil, fmt.Errorf("Failed to connect to ES: %v", esClientErr)
}
Output:
2018/11/09 15:57:54 Failed to connect to ES: no active connection found: no Elasticsearch node available
exit status 1
UPDATE
Setting network.publish_host: "_local_" solved the problem. The publish_address is set to 127.0.0.1:9300 now.
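For context: the v6 client sniffs the cluster by default and then talks to the publish address the node advertises, which is why the Docker-internal address broke node discovery until publish_host was changed. An alternative that is commonly used with single-node Docker setups is to disable sniffing on the client side so it keeps using the URL it was given; a minimal sketch against the same olivere/elastic v6 API as above:

// Sketch: keep the client pinned to the configured URL instead of the
// node's advertised publish address.
esClient, esClientErr := elastic.NewClient(
    elastic.SetURL("http://127.0.0.1:9200"),
    elastic.SetSniff(false),
)
if esClientErr != nil {
    return nil, fmt.Errorf("Failed to connect to ES: %v", esClientErr)
}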
I am building a Java Spring MVC application in Docker, and the Dockerfile build involves interacting with a Postgres container. Whenever I run docker-compose up, the step in the Dockerfile which interacts with Postgres sometimes fails with an exception:
psql: could not translate host name "somePostgres" to address: Name or service not known
FAILED
FAILURE: Build failed with an exception.
Docker Compose file:
abcdweb:
  links:
    - abcdpostgres
  build: .
  ports:
    - "8080:8080"
  volumes:
    - .:/abcd-myproj
  container_name: someWeb

abcdpostgres:
  image: postgres
  environment:
    - POSTGRES_PASSWORD=postgres
    - POSTGRES_USER=postgres
  container_name: somePostgres
The somePostgres container seems to start very quickly, so there is no problem with the Postgres container loading late. Currently I am running this in a VirtualBox VM created by docker-machine. I'm unable to capture the error reliably, as it's not persistent.
PS: Added Dockerfile
FROM java:7
RUN apt-get update && apt-get install -y postgresql-client-9.4
ADD . ./abcd-myproj
WORKDIR /abcd-myproj
RUN ./gradlew build -x test
RUN sh db/importdata.sh
CMD ./gradlew jettyRun
Basically, what this error means is that psql was unable to resolve the host name; try using the IP address instead.
https://github.com/postgres/postgres/blob/313f56ce2d1b9dfd3483e4f39611baa27852835a/src/interfaces/libpq/fe-connect.c#L2275-L2285
case CHT_HOST_NAME:
    ret = pg_getaddrinfo_all(ch->host, portstr, &hint,
                             &conn->addrlist);
    if (ret || !conn->addrlist)
    {
        appendPQExpBuffer(&conn->errorMessage,
                          libpq_gettext("could not translate host name \"%s\" to address: %s\n"),
                          ch->host, gai_strerror(ret));
        goto keep_going;
    }
    break;
https://github.com/postgres/postgres/blob/8255c7a5eeba8f1a38b7a431c04909bde4f5e67d/src/common/ip.c#L57-L75
int
pg_getaddrinfo_all(const char *hostname, const char *servname,
                   const struct addrinfo *hintp, struct addrinfo **result)
{
    int rc;

    /* not all versions of getaddrinfo() zero *result on failure */
    *result = NULL;

#ifdef HAVE_UNIX_SOCKETS
    if (hintp->ai_family == AF_UNIX)
        return getaddrinfo_unix(servname, hintp, result);
#endif

    /* NULL has special meaning to getaddrinfo(). */
    rc = getaddrinfo((!hostname || hostname[0] == '\0') ? NULL : hostname,
                     servname, hintp, result);

    return rc;
}
I think links are discouraged these days. But if you want the services to communicate over a network explicitly, here is the config: you need to define a network and attach both services to it. It is something like:
networks:
  network:
    external: true

abcdweb:
  links:
    - abcdpostgres
  build: .
  ports:
    - "8080:8080"
  volumes:
    - .:/abcd-myproj
  container_name: someWeb
  networks:
    network: null

abcdpostgres:
  image: postgres
  environment:
    - POSTGRES_PASSWORD=postgres
    - POSTGRES_USER=postgres
  container_name: somePostgres
  networks:
    network: null
This way the services will communicate over the network, using the service names as addresses.
I had to set my secret_key_base in secrets.yml.
With the incorrect key, my app did not have permission to resolve the database domain.
I'm running a Rails app in Docker that makes use of secret_key_base. The problem is that I was running the app against the production database using the development environment, which entailed the development secret_key_base. Once I began using the correct key, I could connect to the database.
The error showed up in my Rails container logs as:
Raven 2.13.0 configured not to capture errors: No host specified, no public_key specified, no project_id specified
See this question for how to set the secret_key_base in secrets.yml