I'm trying to set up Apache Superset.
I want to access Postgres and Redis services running on the host from a Superset instance running in Docker.
(I don't want to run containers for the DB and Redis, as these services are already installed for my application.)
I see the following in the Superset documentation:
Note: Users often want to connect to other databases from Superset. Currently, the easiest way to do this is to modify the docker-compose-non-dev.yml file and add your database as a service that the other services depend on (via x-superset-depends-on).
I'm clueless about how to configure the YAML file for this; I'm pretty new to Docker. Could anyone please guide me? Here is the services section from docker-compose-non-dev.yml:
redis:
  image: redis:7
  container_name: superset_cache
  restart: unless-stopped
  volumes:
    - redis:/data
db:
  env_file: docker/.env-non-dev
  image: postgres:14
  container_name: superset_db
  restart: unless-stopped
  volumes:
    - db_home:/var/lib/postgresql/data
If I understand correctly, you want to do two things:
Use existing services for those components
Just make sure they're accessible from your Superset Docker container, then specify the paths to those resources in superset_config.py or superset_config_docker.py, which is presumably where you're setting other config options. E.g., to point to your Postgres service as the backend database, specify:
SQLALCHEMY_DATABASE_URI='postgresql://username:password@path/dbname'
Where path is the hostname or IP address of your Postgres instance. For Redis it's not as clear, but I think you'd set that location with REDIS_HOST= in your config. You can also use a .env file to store these strings instead of putting them directly in your config file.
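For instance, a minimal sketch of the relevant superset_config.py entries, assuming Docker Desktop, where host.docker.internal resolves to the host machine (on plain Linux you may instead need the host's bridge IP or an extra_hosts entry); the credentials, port, and database name are placeholders:

# superset_config.py -- a sketch; hostname, credentials, and db name are assumptions
SQLALCHEMY_DATABASE_URI = 'postgresql://username:password@host.docker.internal:5432/superset'
REDIS_HOST = 'host.docker.internal'
REDIS_PORT = 6379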
Not run the unnecessary containers
You should be able to just delete the container specs in your post and remove both services from the x-superset-depends-on: &superset-depends-on list at the top of the docker-compose file, as sketched below.
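For instance, if the top of the file defines the anchor like this (a sketch; the exact list can vary by Superset version), you would drop the db and redis entries since those services now run on the host:

x-superset-depends-on: &superset-depends-on
  - db      # remove: Postgres now runs on the host
  - redis   # remove: Redis now runs on the host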
I am running my Next.js application using Docker. Multiple hosts point to the container. API calls and static asset fetches only reach the Next.js build if I add all the hosts in docker-compose (extra_hosts). Is there any way I can access the APIs without adding every new host to docker-compose?
I need the origin hostnames for these requests, because configs in the system are domain-based.
Below is a sample of the existing docker-compose file.
image: "403.dkr.ecr.ap-southeast-1.amazonaws.com/whitelabel-pp-3005:latest"
restart: always
environment:
NODE_ENV: production
ports:
- "3005:3005"
extra_hosts:
- "host1:127.0.0.1"
- "host2:127.0.0.1"
- "host3:127.0.0.1"
We have a develop and a production system that use Symfony 5 + nginx + MySQL services running in a Docker environment.
At the moment the nginx webserver runs in the same container as the Symfony service because of this issue:
In our develop environment we are able to mount the Symfony source code into the Docker container (via a docker-compose file).
In our production environment we need to deliver containers that contain all the source code, because we must not put our source code on the server. So there is no folder on the server from which we can mount our source code.
Unfortunately nginx needs the source code as well to make its routing decisions, so we decided to put the Symfony and nginx services together in one container.
Now we want to clean this up and get a better solution by running every service in its own container:
version: '3.5'
services:
  php:
    image: docker_sandbox
    build: ../.
    ...
    volumes:
      - docker_sandbox_src:/var/www/docker_sandbox # <== VOLUME
    networks:
      - docker_sandbox_net
    ...
  nginx:
    image: nginx:1.19.0-alpine
    ...
    volumes:
      - ./nginx/server.conf:/etc/nginx/conf.d/default.conf:ro
      - docker_sandbox_src:/var/www/docker_sandbox # <== VOLUME
    ...
    networks:
      - docker_sandbox_net
    depends_on:
      - php
  mysql:
    ...
volumes:
  docker_sandbox_src:
networks:
  docker_sandbox_net:
    driver: bridge
One possible solution is a named volume shared between the nginx and Symfony services. The problem with that is that on an update of our Symfony image the volume keeps the old content, so nothing is updated until we delete the volume manually.
Is there a better way to handle this issue? Maybe a volume that overwrites its content when a new image is deployed, or an nginx config that does not require the Symfony source code in nginx's own container.
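To illustrate that last idea, here is a rough sketch, not a tested setup: since Symfony routes every request through its front controller, nginx can pass all requests straight to php-fpm without having the sources locally, as long as SCRIPT_FILENAME points at the path inside the php container (static assets would still need separate handling); the service name, port, and paths below are assumptions:

# nginx default.conf (sketch): service name, port, and paths are assumptions
server {
    listen 80;
    location / {
        fastcgi_pass php:9000;
        include fastcgi_params;
        # front controller inside the php container; nginx never reads this file
        fastcgi_param SCRIPT_FILENAME /var/www/docker_sandbox/public/index.php;
    }
}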
I am new to Docker and I am trying to dockerize an application I have written in Go. It is a simple web server that interacts with RabbitMQ and MongoDB.
It takes the credentials from a TOML file and loads them into a config struct before starting the application server on port 3000. These are the credentials:
mongo_server = "localhost"
database = "collect_db"
rabbitmq_server = "amqp://guest:guest@localhost:5672/"
If it can't connect to these URLs it fails with an error. The following is my docker-compose.yml:
version: '3'
services:
  rabbitmq:
    image: rabbitmq
    ports:
      - 5672:5672
  mongodb:
    image: mongo
    ports:
      - 27017:27017
  web:
    build: .
    image: palash2504/collect
    container_name: collect_service
    ports:
      - 3000:3000
    depends_on:
      - rabbitmq
      - mongodb
    links: [rabbitmq, mongodb]
But it fails to connect to rabbitmq at the URL used for local development, i.e. amqp://guest:guest@localhost:5672/.
I realise that the rabbitmq container might be running on some address other than the one provided in the config file.
I would like to know the correct way to set env credentials so my app can connect to rabbitmq.
Also, what approach would be best for initializing connections to external services in my application code? I was thinking about ditching the config.toml file and using os.Getenv and os.Setenv to get the URLs for connections.
Localhost addresses are resolved, well, locally. They thus will not work inside containers, since they will look for a local address (i.e. inside the container).
Services can access each other by using service names as an address. So in the web container you can target mongodb for example.
You might give this a shot:
mongo_server = "mongodb"
database = "collect_db"
rabbitmq_server = "amqp://guest:guest@rabbitmq/"
It is advisable to set service target environment variables in the compose file itself:
#docker-compose.yml
#...other stuff...
  web:
    #...other stuff...
    environment:
      RABBITMQ_SERVER: rabbitmq
      MONGO_SERVER: mongodb
    depends_on:
      - rabbitmq
      - mongodb
This gives you a single place to make adjustments to the configuration.
As a side note, it seems to me that links: [rabbitmq, mongodb] can be removed; the shared default network already provides name resolution. I would also advise against overriding the container name (remove container_name: collect_service) unless it is necessary.
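To pick those variables up on the Go side, here is a minimal sketch, assuming the variable names from the compose file above and falling back to the old local defaults when they are unset:

package main

import (
	"fmt"
	"os"
)

// getenv returns the value of key, or fallback if the variable is unset.
func getenv(key, fallback string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return fallback
}

func main() {
	mongoServer := getenv("MONGO_SERVER", "localhost")
	rabbitServer := getenv("RABBITMQ_SERVER", "localhost")

	mongoURI := fmt.Sprintf("mongodb://%s:27017", mongoServer)
	amqpURI := fmt.Sprintf("amqp://guest:guest@%s:5672/", rabbitServer)

	// hand these URIs to your mongo and rabbitmq clients
	fmt.Println("connecting to", mongoURI, "and", amqpURI)
}

This keeps local development working with the old defaults while the compose environment overrides them inside containers.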
I'm trying to set up an application environment with two different docker-compose.yml files. The first one creates its services in the default network elastic-apm-stack_default. To reach the services of both docker-compose files, I use the external network option in the second docker-compose file. Both files look like this:
# elastic-apm-stack/docker-compose.yml
services:
  apm-server:
    image: docker.elastic.co/apm/apm-server:6.2.4
    build: ./apm_server
    ports:
      - 8200:8200
    depends_on:
      - elasticsearch
      - kibana
  ...
# sockshop/docker-compose.yml
services:
  front-end:
    ...
    ...
    networks:
      - elastic-apm-stack_default
networks:
  elastic-apm-stack_default:
    external: true
Now the front-end service in the second file needs to send data to the apm-server service in the first file. I therefore used the URL http://apm-server:8200 in the front-end service's source code, but I always get a connection refused error. If I define all services in a single docker-compose file it works, but I want to keep the docker-compose files separate.
Could anyone help me? :)
Docker's default bridge network uses 172.17.0.1 as its gateway address on the host.
Since your apm-server publishes port 8200 to the host (ports: - 8200:8200), you may use the URL
http://172.17.0.1:8200
to get access to your apm-server container from inside another container.
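If you would rather keep using the service name, it is worth checking that the external network really is named elastic-apm-stack_default and that both containers are attached to it, since compose prefixes network names with the project directory name. For example:

docker network ls
docker network inspect elastic-apm-stack_default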
I'm working on getting two different services inside a single docker-compose.yml to communicate with each other.
The two services are regular Node.js servers (app1 & app2). app1 receives POST requests from an external source and should then send a request to the other Node.js server, app2, with information based on the initial POST request.
The challenge I'm facing is how to make the two Node.js containers communicate with each other without hardcoding a specific container name. The only way I can currently get the two containers to communicate is to hardcode a URL like http://myproject_app2_1, which directs the POST request from app1 to app2 correctly, but because of the way Docker increments container names it doesn't scale well, nor does it cope with containers crashing and restarting.
Instead I'd prefer to send the POST request to something along the lines of http://app2, or a similar way to alias a number of containers, so that no matter how many instances of the app2 container exist, Docker will pass the request to one of the running app2 containers.
Here's a sample of my docker-compose.yml file:
version: '2'
services:
  app1:
    image: 'mhart/alpine-node:6.3.0'
    container_name: app1
    command: npm start
  app2:
    image: 'mhart/alpine-node:6.3.0'
    container_name: app2
    command: npm start
  # databases [...]
Thanks in advance.
OK, this is two questions.
First: how to avoid hardcoding container names.
You can use environment variables, like this:
Node.js file:
const http = require('http');

const app2Address = process.env.APP2_ADDRESS;
const response = http.request(app2Address);
docker compose file:
app1:
  image: 'mhart/alpine-node:6.3.0'
  container_name: app1
  command: npm start
  environment:
    APP2_ADDRESS: ${app2_address}
app2:
  image: 'mhart/alpine-node:6.3.0'
  container_name: app2
  command: npm start
  environment:
    HOSTNAME: ${app2_address}
and an .env file like:
app2_address=myapp2.com
You can also use a wildcard placeholder in the application config file and substitute the real hostname when the container starts. For that, create an entrypoint.sh and use sed, like:
sed -i "s/APP_2HOSTNAME_WILDCARD/${APP2_ADDRESS}/g" /app1/config.js
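A fuller entrypoint.sh sketch along those lines (the wildcard name and config path are assumptions carried over from above):

#!/bin/sh
# entrypoint.sh (sketch): replace the wildcard in the baked-in config with
# the real hostname from the environment, then hand control to the main process
sed -i "s/APP_2HOSTNAME_WILDCARD/${APP2_ADDRESS}/g" /app1/config.js
exec "$@"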
Second: how to do transparent load balancing.
You need an HTTP load balancer, such as:
haproxy
nginx as a load balancer
There is a hello-world tutorial on how to do load balancing with Docker.
When you run two containers from one compose file, Docker automatically sets up an internal DNS that lets you reference other containers by the service name defined in the compose file (assuming they are on the same network). So this should work when referencing http://app2 from the first service.
See this example, which proxies requests from proxy to the backend whoamiapp by just using the service name.
default.conf
server {
    listen 80;
    location / {
        proxy_pass http://whoamiapp;
    }
}
docker-compose.yml
version: "2"
services:
proxy:
image: nginx
volumes:
- ./default.conf:/etc/nginx/conf.d/default.conf:ro
ports:
- "80:80"
whoamiapp:
image: emilevauge/whoami
Run it using docker-compose up -d and try running curl <dockerhost>.
This sample uses the default network with docker-compose file version 2. You can read more about how networking with docker-compose works here: https://docs.docker.com/compose/networking/
Probably your configuration of the container_name property somehow interferes with this behaviour? You should not need to define it yourself.
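As a usage sketch (assuming a reasonably recent docker-compose), dropping container_name also lets you run several replicas of app2, and the app2 service name then resolves to the individual replicas via Docker's DNS:

docker-compose up -d --scale app2=3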