I'm trying to query Firebase from a Node.js app in a Docker container. It works locally but not in the container. I have port 443 open, and I can make a request to Google fine, but I never get a response back when running in the Docker container. I suspect it's something to do with websockets.
My Ports are: 0.0.0.0:443->443/tcp, 0.0.0.0:8080->8080/tcp
And in my docker syslog:
: dropping unexpected TCP packet sent from 172.18.0.3:33288 to 216.58.210.173:443 (valid sources = 192.168.65.2, 0.0.0.0)
Any ideas on what to try?
// Assumed import for this snippet; the question uses the legacy server SDK
// that accepted a serviceAccount option directly. firebaseKey is the parsed
// service-account JSON.
const firebase = require('firebase');

firebase.initializeApp({
  serviceAccount: firebaseKey,
  databaseURL: 'https://my-firebase.firebaseio.com'
});

const userId = 'xxxxxxxxxxxx';

firebase.database().ref(`datasource/${userId}`)
  .once('value')
  .then((snapshot) => {
    console.log(snapshot.val());
    return callback(null, 'ok'); // callback comes from the surrounding handler
  }, (error) => {
    console.error(error);
    return callback(error);
  });
And my docker-compose.yml
version: "2"
services:
test-import:
build: .
command: npm run dev
volumes:
- .:/var/www
ports:
- "7000:8080"
- "443:443"
depends_on:
- mongo
networks:
- import-net
mongo:
container_name: mongo
image: mongo
networks:
- import-net
networks:
import-net:
driver: bridge
In my case, the problem was that serviceAccount.privateKey was set from an environment variable. The value of that variable is a multi-line string, and that was causing the issue. So double-check that serviceAccount is correctly configured in order to solve this.
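A quick way to normalize the key — a minimal sketch, assuming the key is stored in a hypothetical FIREBASE_PRIVATE_KEY variable with its newlines escaped as \n (as .env files often require):

// Hypothetical env-var names for illustration; the replace() call is the point:
// restore real newlines, or the PEM key is invalid and auth silently stalls.
const firebaseKey = {
  projectId: process.env.FIREBASE_PROJECT_ID,
  clientEmail: process.env.FIREBASE_CLIENT_EMAIL,
  privateKey: (process.env.FIREBASE_PRIVATE_KEY || '').replace(/\\n/g, '\n'),
};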
Edit
I had the same problem again today. The solution was to sync the time with an NTP server, because the time in the Docker container was wrong (a few days off).
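The clock matters because the SDK authenticates with signed, time-stamped JWTs; if the container clock is days off, Google rejects the token exchange and the connection never completes. A quick sketch to check the skew from inside the container, using only Node's standard library:

import * as https from 'https';

// Compare the container clock to the Date header of a well-known endpoint.
// A skew of minutes (let alone days) will break Google's OAuth token exchange.
https.get('https://www.google.com', (res) => {
  const serverTime = new Date(res.headers.date ?? '').getTime();
  const skewSeconds = Math.abs(Date.now() - serverTime) / 1000;
  console.log(`clock skew ≈ ${Math.round(skewSeconds)}s`);
  res.resume(); // drain the response so the socket closes
});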
I'm quite new to ArangoDB, and I have a requirement to spin up a Docker container for ArangoDB while executing Go code. The limitation is that the module that enables the spin-up (testcontainers) takes in some parameters for the env settings of the Docker container, presumably for optimal utilisation of resources (for example, "ES_JAVA_OPTS" giving 1 GB to the JVM when spinning up an Elasticsearch container).
Please let me know what these env settings could be for an ArangoDB Docker container (ideally the standalone image rather than the cluster one), and how to go about setting them so that the spin-up actually happens at run time.
I was able to reproduce your use case using testcontainers-go. Here is a snippet bootstrapping the runtime dependencies for your app programmatically next to the tests, which imho is closer to the developer than externalizing it into a docker-compose call, possibly invoked from a Makefile or similar.
You would need to add your own app container to the game ;)
package arangodb

import (
	"context"
	"testing"

	"github.com/docker/docker/api/types/container"
	"github.com/stretchr/testify/require"
	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

func TestArangoDB(t *testing.T) {
	ctx := context.Background()

	// A dedicated bridge network shared by every container in the test.
	networkName := "backend"
	newNetwork, err := testcontainers.GenericNetwork(ctx, testcontainers.GenericNetworkRequest{
		NetworkRequest: testcontainers.NetworkRequest{
			Name:           networkName,
			CheckDuplicate: true,
		},
	})
	if err != nil {
		t.Fatal(err)
	}
	t.Cleanup(func() {
		require.NoError(t, newNetwork.Remove(ctx))
	})

	arangodb, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: testcontainers.ContainerRequest{
			Image: "arangodb/arangodb:latest",
			Env: map[string]string{
				"ARANGODB_USERNAME":    "myuser",
				"ARANGODB_PASSWORD":    "mypassword",
				"ARANGODB_DBNAME":      "graphdb",
				"ARANGO_ROOT_PASSWORD": "myrootpassword",
			},
			Networks: []string{networkName},
			Resources: container.Resources{
				Memory: 2048 * 1024 * 1024, // 2048 MB
			},
			// Block until ArangoDB logs that it is ready to accept connections.
			WaitingFor: wait.ForLog("is ready for business"),
			Mounts: testcontainers.ContainerMounts{
				testcontainers.BindMount("/my/database/copy/for/arango/data", "/var/lib/arangodb3"),
			},
		},
		Started: true,
	})
	require.NoError(t, err)
	defer arangodb.Terminate(ctx)

	redis, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: testcontainers.ContainerRequest{
			Image:    "redis:alpine",
			Networks: []string{networkName},
		},
		Started: true,
	})
	require.NoError(t, err)
	defer redis.Terminate(ctx)
}
I would suggest using Docker Compose and setting up different containers: one running ArangoDB and another running the Go code. That way you can have different env settings for each of them. Below is a sample docker-compose.yml file that has ArangoDB, Go and Redis. I am adding Redis to show that you can add even more services there if needed.
version: '3.9'

# Define services
services:
  arangodb:
    image: arangodb/arangodb:latest
    ports:
      - 8529:8529
    volumes:
      - /my/database/copy/for/arango/data:/var/lib/arangodb3
    environment:
      - ARANGODB_USERNAME=myuser
      - ARANGODB_PASSWORD=mypassword
      - ARANGODB_DBNAME=graphdb
      - ARANGO_ROOT_PASSWORD=myrootpassword
    container_name: "arango_docker_compose"
    mem_limit: 2048m
    networks:
      - backend

  # Golang service
  app:
    # Build the image for the service from a Dockerfile of your choice
    build:
      context: .            # use an image built from the Dockerfile in the current directory
      dockerfile: Dockerfile
    ports:
      - "8080:8080"         # forward the exposed port 8080 on the container to port 8080 on the host, if needed
    restart: unless-stopped
    depends_on:             # this service depends on redis and arangodb; start them first
      - redis
      - arangodb
    environment:            # pass environment variables to the service
      - REDIS_URL=redis:6379
      - ARANGO_PORT=8529
    networks:               # networks to join (services on the same network can reach each other by name)
      - backend

  # Redis service
  redis:
    image: "redis:alpine"   # use a public Redis image to build the redis service
    restart: unless-stopped
    networks:
      - backend

networks:
  backend:
For more information about the parameters allowed by the ArangoDB Docker image, please see the ArangoDB Docker Hub page.
A sample Dockerfile for the Go container can be found here. Please bear in mind that these files are mere samples and you will need to add/replace your required configuration in them. As you want to use testcontainers, you can also use the Dockerfile here.
Please let me know whether that helps.
Thank you!
I currently have a very strange error with Docker, more precisely with Redis.
My backend runs on Node.js with TypeScript:
Code
// Assumed imports for this snippet: ioredis and graphql-redis-subscriptions.
import Redis from "ioredis";
import { RedisPubSub } from "graphql-redis-subscriptions";

const redisPubSubOptions: any = {
  host: process.env.REDIS_HOST || "127.0.0.1",
  port: process.env.REDIS_PORT || 6379,
  connectTimeout: 10000,
  retryStrategy: (times: any) => Math.min(times * 50, 2000),
};

export const pubsub: RedisPubSub = new RedisPubSub({
  publisher: new Redis(redisPubSubOptions),
  subscriber: new Redis(redisPubSubOptions),
});
Dockerfile
FROM node:14-alpine as tsc-builder
WORKDIR /usr/src/app
COPY . .
RUN yarn install
EXPOSE 4000
CMD yarn run dev
docker-compose
version: "3.8"
services:
backend:
build: .
container_name: backend
ports:
- 4242:4242
depends_on:
- redis
env_file:
- ./docker/env/.env.dev
environment:
- ENVIRONMENT=development
- REDIS_PORT=6379
- REDIS_HOST=redis
redis:
image: redis:6.0.12-alpine
command: redis-server --maxclients 100000 --appendonly yes
hostname: redis
ports:
- "6379:6379"
restart: always
When I start my server, the backend comes up, and then the Redis error follows:
Error: connect ECONNREFUSED 127.0.0.1:6379
Redis and your backend run in different containers, so they have different IP addresses on the Docker network. You are trying to connect to 127.0.0.1, which is the local address of the backend container itself.
Method 1:
Since you are using docker-compose (which creates a network between the services), you can use the service name instead of 127.0.0.1. For example:
const redisPubSubOptions: any = {
  host: process.env.REDIS_HOST || "redis",
  port: process.env.REDIS_PORT || 6379,
  connectTimeout: 10000,
  retryStrategy: (times: any) => Math.min(times * 50, 2000),
};

export const pubsub: RedisPubSub = new RedisPubSub({
  publisher: new Redis(redisPubSubOptions),
  subscriber: new Redis(redisPubSubOptions),
});
Method 2:
The other method is to bind the Redis port to the IP address of the Docker interface on the host machine. Most of the time that is 172.17.0.1, but with ip -o a (if you are using Linux) you can see the Docker interface and its IP address.
So the redis service becomes:
redis:
  image: redis:6.0.12-alpine
  command: redis-server --maxclients 100000 --appendonly yes
  hostname: redis
  ports:
    - "172.17.0.1:6379:6379"
  restart: always
This publishes Redis on 172.17.0.1:6379 (or whichever Docker interface IP address the host uses), and you can simply use that address in the application.
Note: you can handle these values using environment variables, which is a better and more standard solution. You can take a look at this.
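To make that note concrete — a minimal sketch, assuming the REDIS_HOST/REDIS_PORT variables already set in the question's compose file; note that environment variables always arrive as strings, so the port is coerced to a number:

const redisPubSubOptions: any = {
  host: process.env.REDIS_HOST || "redis",
  // env vars are strings; coerce the port so the client gets a number
  port: Number(process.env.REDIS_PORT) || 6379,
  connectTimeout: 10000,
  retryStrategy: (times: any) => Math.min(times * 50, 2000),
};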
I have two docker containers frontend and data-service.
frontend is using NextJS which is only relevant because NextJS has a method called getInitialProps() which can be run on the server, or can be run in the visitor's browser (I have no control over this).
In getInitialProps() I need to call an API to get the data for the page:
fetch('http://data-service:3001/user/123').then(...
When this is called on the server, the API returns fine because my frontend container has access to the internal Docker network and can therefore reference the data-service using the hostname http://data-service.
When this is called on the client, however, it fails (obviously) because Docker is now exposed as http://localhost and I can't reference http://data-service anymore.
How can I configure Docker so that I can use one URL for both use cases? I would prefer not to have to figure out which environment I'm in within my NextJS code, if possible.
If seeing my docker-compose is useful I have included it below:
version: '2.2'
services:
  data-service:
    build: ./data-service
    command: npm run dev
    volumes:
      - ./data-service:/usr/src/app/
      - /usr/src/app/node_modules
    ports:
      - "3001:3001"
    environment:
      SDKKEY: "whatever"
  frontend:
    build: ./frontend
    command: npm run dev
    volumes:
      - ./frontend:/usr/src/app/
      - /usr/src/app/node_modules
    environment:
      API_PORT: "3000"
      API_HOST: "http://catalog-service"
    ports:
      - "3000:3000"
The most elegant solution I've found is described in this post: Docker-compose make 2 microservices (frontend+backend) communicate to each other with http requests
Example implementation:
In next.config.js:
module.exports = {
  serverRuntimeConfig: {
    // Will only be available on the server side
    URI: 'your-docker-uri:port'
  },
  publicRuntimeConfig: {
    // Will be available on both server and client
    URI: 'http://localhost:port'
  }
}
In pages/index.js:
import getConfig from 'next/config';

const { serverRuntimeConfig, publicRuntimeConfig } = getConfig();
// serverRuntimeConfig is an empty object in the browser, so this picks the
// internal Docker URI on the server and the public URI on the client.
const API_URI = serverRuntimeConfig.URI || publicRuntimeConfig.URI;

const Index = ({ json }) => <div>Index</div>;

Index.getInitialProps = async () => {
  ...
  const res = await fetch(`${API_URI}/endpoint`);
  ...
};
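One wrinkle worth knowing: runtime config is only populated on pages that opt out of Automatic Static Optimization, which defining getInitialProps already does here. If several pages need the value, a tiny helper keeps the selection logic in one place — a sketch, with lib/api.ts being a hypothetical path:

// lib/api.ts (hypothetical helper) — resolve the API base once.
import getConfig from 'next/config';

const { serverRuntimeConfig, publicRuntimeConfig } = getConfig();

// serverRuntimeConfig is an empty object in the browser, so the public URI wins there.
export const API_URI: string = serverRuntimeConfig.URI || publicRuntimeConfig.URI;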
I'm working on our webhook locally before building a Docker container, and want my (Linux) container to communicate with it using host.docker.internal.
It had been working before, but lately, for some reason, I'm getting this error from our graphql-engine, Hasura:
{
  "timestamp": "2019-11-05T18:45:32.860+0000",
  "level": "error",
  "type": "webhook-log",
  "detail": {
    "response": null,
    "url": "http://host.docker.internal:3000/simple/webhook",
    "method": "GET",
    "http_error": {
      "type": "http_exception",
      "message": "ConnectionFailure Network.Socket.getAddrInfo (called with preferred socket type/protocol: AddrInfo {addrFlags = [AI_ADDRCONFIG], addrFamily = AF_UNSPEC, addrSocketType = Stream, addrProtocol = 0, addrAddress = <assumed to be undefined>, addrCanonName = <assumed to be undefined>}, host name: Just \"host.docker.internal\", service name: Just \"3000\"): does not exist (Temporary failure in name resolution)"
    },
    "status_code": null
  }
}
Here's my docker compose:
version: '3.6'
services:
  postgres:
    image: postgres:11.2
    restart: always
    ports:
      - 5432:5432
    volumes:
      - postgres:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=...
      - POSTGRES_PASSWORD=...
  graphql-engine:
    image: hasura/graphql-engine:latest
    depends_on:
      - postgres
    restart: always
    environment:
      - HASURA_GRAPHQL_DATABASE_URL=postgres://...:...@postgres:5432/postgres
      - HASURA_GRAPHQL_ACCESS_KEY=...
      - HASURA_GRAPHQL_AUTH_HOOK=http://host.docker.internal:3000/simple/webhook
    command:
      - graphql-engine
      - serve
      - --enable-console
    ports:
      - 8080:8080
volumes:
  postgres:
  data:
The local project is definitely working and listening on port 3000. Nonetheless, it isn't receiving any requests [as it should] from the graphql-engine container. Could it be related to our proxy?
It seemed to be an issue with Docker Desktop.
Uninstalling the whole Docker environment and rebuilding it all fixed it.
I'm working with a docker-compose file from an open-source repo. Notably, it's missing the version and services keys, but it still works (up until now, I have not seen a compose file without these keys).
redis:
  image: redis
  ports:
    - '6379'

app:
  build: .
  environment:
    - LOG_LEVEL='debug'
  links:
    - redis
docker-compose up starts everything up and the app is able to talk to redis via 127.0.0.1:6379.
However, when I add the version and services keys back in, connections to redis are refused:
version: '3'
services:
  redis:
    image: redis
    ports:
      - '6379'
  app:
    build: .
    environment:
      - LOG_LEVEL='debug'
    links:
      - redis
Which results in:
[Wed Jan 03 2018 20:51:58 GMT+0000 (UTC)] ERROR { Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
    at Object.exports._errnoException (util.js:896:11)
    at exports._exceptionWithHostPort (util.js:919:20)
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1073:14)
  code: 'ECONNREFUSED',
  errno: 'ECONNREFUSED',
  syscall: 'connect',
  address: '127.0.0.1',
  port: 6379 }
Why does adding version: '3' and services: lead to failure to connect?
You don't need to specify the ports or the links for services on the same network (compose file). You can use:
version: '3'
services:
  redis:
    image: redis
  app:
    build: .
    environment:
      - LOG_LEVEL='debug'
And then, in your app code, refer to Redis as just 'redis:6379'. If you look at the Dockerfile for the redis image, you can see the port is already exposed at the end.
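For instance — a minimal sketch, assuming the pre-v4 node-redis API (createClient(port, host)) used elsewhere in this thread:

// Inside the compose network the hostname "redis" resolves to the redis service
// via Docker's internal DNS, so no published host port is required for
// container-to-container traffic.
const redis = require('redis');
const client = redis.createClient(6379, 'redis');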
When you want to expose the service on a specific host port, in Docker Compose file version 3 you should use this syntax:
ports:
  - '6379:6379'
Check the docs here:
Either specify both ports (HOST:CONTAINER), or just the container port (a random host port will be chosen).
This is what worked for me after having the same issue:
docker-compose.yml
version: "3"
services:
server:
...
depends_on:
- redis
redis:
image: redis
My redis config file:
const redis = require('redis');

const redisHost = 'redis';
const redisPort = '6379';

let client = redis.createClient(redisPort, redisHost);

client.on('connect', () => {
  console.log(`Redis connected to ${redisHost}:${redisPort}`);
});

client.on('error', (err) => {
  console.log(`Redis could not connect to ${redisHost}:${redisPort}: ${err}`);
});

module.exports = client;
The port might be in use. Either kill the container using it, or restart Docker to release the port.