Can't create a number format in Docker using Apache Superset - docker
I am trying to create a BRL number format (R$123,345,789.22) for my big number charts, but I don't know how to do that. I looked at this solution: Customise the number format in Apache superset, but I can't make it work. I think it is because my Superset is installed locally via Docker containers: it just downloads the images, so no matter how I change my local Superset files, nothing changes in the app (I don't know much about Docker, by the way). Here is the docker-compose file used to build Superset:
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
x-superset-image: &superset-image apache/superset:${TAG:-latest-dev}
x-superset-depends-on: &superset-depends-on
  - db
  - redis
x-superset-volumes: &superset-volumes
  # /app/pythonpath_docker will be appended to the PYTHONPATH in the final container
  - ./docker:/app/docker
  - superset_home:/app/superset_home
version: "3.7"
services:
  redis:
    image: redis:latest
    container_name: superset_cache
    restart: unless-stopped
    volumes:
      - redis:/data
  db:
    env_file: docker/.env-non-dev
    image: postgres:10
    container_name: superset_db
    restart: unless-stopped
    volumes:
      - db_home:/var/lib/postgresql/data
  superset:
    env_file: docker/.env-non-dev
    image: *superset-image
    container_name: superset_app
    command: ["/app/docker/docker-bootstrap.sh", "app-gunicorn"]
    user: "root"
    restart: unless-stopped
    ports:
      - 8088:8088
    depends_on: *superset-depends-on
    volumes: *superset-volumes
  superset-init:
    image: *superset-image
    container_name: superset_init
    command: ["/app/docker/docker-init.sh"]
    env_file: docker/.env-non-dev
    depends_on: *superset-depends-on
    user: "root"
    volumes: *superset-volumes
  superset-worker:
    image: *superset-image
    container_name: superset_worker
    command: ["/app/docker/docker-bootstrap.sh", "worker"]
    env_file: docker/.env-non-dev
    restart: unless-stopped
    depends_on: *superset-depends-on
    user: "root"
    volumes: *superset-volumes
  superset-worker-beat:
    image: *superset-image
    container_name: superset_worker_beat
    command: ["/app/docker/docker-bootstrap.sh", "beat"]
    env_file: docker/.env-non-dev
    restart: unless-stopped
    depends_on: *superset-depends-on
    user: "root"
    volumes: *superset-volumes
volumes:
  superset_home:
    external: false
  db_home:
    external: false
  redis:
    external: false
Here is the modified file, as in Customise the number format in Apache superset:
import {
  createDurationFormatter,
  createD3NumberFormatter,
  getNumberFormatter,
  getNumberFormatterRegistry,
  NumberFormats,
  getTimeFormatterRegistry,
  smartDateFormatter,
  smartDateVerboseFormatter,
} from '@superset-ui/core';

export default function setupFormatters() {
  getNumberFormatterRegistry()
    // Add shims for format strings that are deprecated or common typos.
    // Temporary solution until performing a db migration to fix this.
    .registerValue(',0', getNumberFormatter(',.4~f'))
    .registerValue('null', getNumberFormatter(',.4~f'))
    .registerValue('%', getNumberFormatter('.0%'))
    .registerValue('.', getNumberFormatter('.4~f'))
    .registerValue(',f', getNumberFormatter(',d'))
    .registerValue(',r', getNumberFormatter(',.4f'))
    .registerValue('0f', getNumberFormatter(',d'))
    .registerValue(',#', getNumberFormatter(',.4~f'))
    .registerValue('$,f', getNumberFormatter('$,d'))
    .registerValue('0%', getNumberFormatter('.0%'))
    .registerValue('f', getNumberFormatter(',d'))
    .registerValue(',.', getNumberFormatter(',.4~f'))
    .registerValue('.1%f', getNumberFormatter('.1%'))
    .registerValue('1%', getNumberFormatter('.0%'))
    .registerValue('3%', getNumberFormatter('.0%'))
    .registerValue(',%', getNumberFormatter(',.0%'))
    .registerValue('.r', getNumberFormatter('.4~f'))
    .registerValue('$,.0', getNumberFormatter('$,d'))
    .registerValue('$,.1', getNumberFormatter('$,.1~f'))
    .registerValue(',0s', getNumberFormatter(',.4~f'))
    .registerValue('%%%', getNumberFormatter('.0%'))
    .registerValue(',0f', getNumberFormatter(',d'))
    .registerValue('+,%', getNumberFormatter('+,.0%'))
    .registerValue('$f', getNumberFormatter('$,d'))
    .registerValue('+,', getNumberFormatter(NumberFormats.INTEGER_SIGNED))
    .registerValue(',2f', getNumberFormatter(',.4~f'))
    .registerValue(',g', getNumberFormatter(',.4~f'))
    .registerValue('int', getNumberFormatter(NumberFormats.INTEGER))
    .registerValue('.0%f', getNumberFormatter('.1%'))
    .registerValue('$,0', getNumberFormatter('$,.4f'))
    .registerValue('$,0f', getNumberFormatter('$,.4f'))
    .registerValue('$,.f', getNumberFormatter('$,.4f'))
    .registerValue('DURATION', createDurationFormatter())
    .registerValue(
      'DURATION_SUB',
      createDurationFormatter({ formatSubMilliseconds: true }),
    )
    // Note: in the original snippet a stray semicolon ended the chain
    // before this call, which is a syntax error; it must stay chained.
    .registerValue(
      'CURRENCY_BRAZIL',
      createD3NumberFormatter({
        locale: {
          decimal: ',',
          thousands: '.',
          grouping: [3], // without grouping, d3 does not apply the thousands separator
          currency: ['R$', ''],
        },
        formatString: '$,.2f',
      }),
    );

  getTimeFormatterRegistry()
    .registerValue('smart_date', smartDateFormatter)
    .registerValue('smart_date_verbose', smartDateVerboseFormatter)
    .setDefaultKey('smart_date');
}
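For reference, the CURRENCY_BRAZIL formatter above applies d3's '$,.2f' pattern with pt-BR separators ('.' for thousands, ',' for decimals, 'R$' as the prefix). A quick way to sanity-check the expected output is this small Python sketch (a hypothetical helper for illustration, not Superset code):

```python
def format_brl(value: float) -> str:
    """Mimic d3-format's '$,.2f' under a pt-BR locale:
    '.' groups thousands, ',' marks decimals, 'R$' is the prefix."""
    us_style = f"{value:,.2f}"  # e.g. '123,345,789.22'
    # Swap the US separators for the Brazilian ones via a placeholder.
    swapped = us_style.replace(",", "\x00").replace(".", ",").replace("\x00", ".")
    return f"R${swapped}"

print(format_brl(123345789.22))  # R$123.345.789,22
```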
So my question is: how can I create a custom number format on Docker Superset?
I got this answer from a fellow user, made-of-imposter-syndr:
" From what I can see, you are using docker-compose-non-dev.yml as your compose file, which uses pre-built frontend assets, which is why you are not able to see the changes you make.
Try running docker-compose -f docker-compose.yml up or simply, docker-compose up (If a file with the name docker-compose.yml file exists, docker-compose up automatically picks that up)"
However, after I tried running "docker-compose up" to start Superset, whenever I go to localhost:8088 it shows a weird blank screen:
(screenshot: blank screen)
So I can't run Superset using docker-compose.yml; it only runs with docker-compose-non-dev.yml, but, as mentioned above, apparently I can't change the code that way.
Here is a link to the docker-compose up output logs from my terminal:
https://pastebin.com/iyFBbWdM
Can someone help me solve this blank screen?
From what I can see, you are using docker-compose-non-dev.yml as your compose file, which uses pre-built frontend assets, which is why you are not able to see the changes you make.
Try running docker-compose -f docker-compose.yml up or simply, docker-compose up (If a file with the name docker-compose.yml file exists, docker-compose up automatically picks that up)
More information about that is here, in the Superset documentation.
If you are interested in learning more about Docker in general, this tutorial was quite useful for me.
Related
Apache Superset Configuration in local machine not loading properly
I tried configuring Superset on my local machine, but it ends up with broken links and images as attached below (screenshot). I tried increasing the RAM size in Docker preferences and also tried removing all the existing containers and volumes and rebuilding, but the issue still remains the same. Hereby attached the terminal messages:

A Default SECRET_KEY was detected, please use superset_config.py to override it.
superset_init | Use a strong complex alphanumeric string and use a tool to help you generate
superset_init | a sufficiently random sequence, ex: openssl rand -base64 42
superset_init | --------------------------------------------------------------------------------
superset_init | --------------------------------------------------------------------------------
superset_init | 2022-08-25 06:18:59,927:INFO:superset.utils.logging_configurator:logging was configured successfully
superset_init | 2022-08-25 06:18:59,954:INFO:root:Configured event logger of type
superset_worker_beat | [2022-08-25 06:19:00,014: INFO/MainProcess] Scheduler: Sending due task reports.scheduler (reports.scheduler)
superset_worker | [2022-08-25 06:19:00,041: INFO/MainProcess] Task reports.scheduler[9e6d7309-2a70-4ca7-82be-e5c96e8aa206] received
superset_node | npm WARN config optional Use `--omit=optional` to exclude optional dependencies, or
superset_node | npm WARN config `--include=optional` to include them.
superset_node | npm WARN config
superset_node | npm WARN config Default value does install optional deps unless otherwise omitted.
superset_node | npm WARN using --force Recommended protections disabled.
superset_worker | [2022-08-25 06:19:00,184: INFO/ForkPoolWorker-1] Task reports.scheduler[9e6d7309-2a70-4ca7-82be-e5c96e8aa206] succeeded in 0.13767425700007152s: None
superset_init | /usr/local/lib/python3.8/site-packages/flask_appbuilder/models/sqla/interface.py:68: SAWarning: relationship 'SqlaTable.slices' will copy column tables.id to column slices.datasource_id, which conflicts with relationship(s): 'Slice.table' (copies tables.id to slices.datasource_id). If this is not the intention, consider if these relationships should be linked with back_populates, or if viewonly=True should be applied to one or more if they are read-only. For the less common case that foreign key constraints are partially overlapping, the orm.foreign() annotation can be used to isolate the columns that should be written towards. To silence this warning, add the parameter 'overlaps="table"' to the 'SqlaTable.slices' relationship. (Background on this error at: https://sqlalche.me/e/14/qzyx)
superset_init | for prop in class_mapper(obj).iterate_properties:
superset_init | 2022-08-25 06:19:09,924:INFO:superset.utils.database:Creating database reference for examples
superset_init | 2022-08-25 06:19:11,021:DEBUG:superset.models.core:Database.get_sqla_engine(). Masked URL: postgresql://superset:XXXXXXXXXX@db:5432/superset
superset_app | 127.0.0.1 - - [25/Aug/2022 06:19:24] "GET /health HTTP/1.1" 200 -
superset_app | 2022-08-25 06:19:24,675:INFO:werkzeug:127.0.0.1 - - [25/Aug/2022 06:19:24] "GET /health HTTP/1.1" 200 -

Commands tried:

docker compose down -v
docker compose up

But it works well with the non-dev environment with the following commands:

docker-compose -f docker-compose-non-dev.yml pull
docker-compose -f docker-compose-non-dev.yml up

Hereby attached the docker-compose.yml file:

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
x-superset-image: &superset-image apache/superset:${TAG:-latest-dev}
x-superset-user: &superset-user root
x-superset-depends-on: &superset-depends-on
  - db
  - redis
x-superset-volumes: &superset-volumes
  # /app/pythonpath_docker will be appended to the PYTHONPATH in the final container
  - ./docker:/app/docker
  - ./superset:/app/superset
  - ./superset-frontend:/app/superset-frontend
  - superset_home:/app/superset_home
  - ./tests:/app/tests
version: "3.7"
services:
  redis:
    image: redis:latest
    container_name: superset_cache
    restart: unless-stopped
    ports:
      - "127.0.0.1:6379:6379"
    volumes:
      - redis:/data
  db:
    env_file: docker/.env
    image: postgres:14
    container_name: superset_db
    restart: unless-stopped
    ports:
      - "127.0.0.1:5432:5432"
    volumes:
      - db_home:/var/lib/postgresql/data
  superset:
    env_file: docker/.env
    image: *superset-image
    container_name: superset_app
    command: ["/app/docker/docker-bootstrap.sh", "app"]
    restart: unless-stopped
    ports:
      - 8088:8088
    user: *superset-user
    depends_on: *superset-depends-on
    volumes: *superset-volumes
    environment:
      CYPRESS_CONFIG: "${CYPRESS_CONFIG}"
  superset-websocket:
    container_name: superset_websocket
    build: ./superset-websocket
    image: superset-websocket
    ports:
      - 8080:8080
    depends_on:
      - redis
    # Mount everything in superset-websocket into container and
    # then exclude node_modules and dist with bogus volume mount.
    # This is necessary because host and container need to have
    # their own, separate versions of these files. .dockerignore
    # does not seem to work when starting the service through
    # docker-compose.
    #
    # For example, node_modules may contain libs with native bindings.
    # Those bindings need to be compiled for each OS and the container
    # OS is not necessarily the same as host OS.
    volumes:
      - ./superset-websocket:/home/superset-websocket
      - /home/superset-websocket/node_modules
      - /home/superset-websocket/dist
    environment:
      - PORT=8080
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - REDIS_SSL=false
  superset-init:
    image: *superset-image
    container_name: superset_init
    command: ["/app/docker/docker-init.sh"]
    env_file: docker/.env
    depends_on: *superset-depends-on
    user: *superset-user
    volumes: *superset-volumes
    environment:
      CYPRESS_CONFIG: "${CYPRESS_CONFIG}"
  superset-node:
    image: node:16
    container_name: superset_node
    command: ["/app/docker/docker-frontend.sh"]
    env_file: docker/.env
    depends_on: *superset-depends-on
    volumes: *superset-volumes
  superset-worker:
    image: *superset-image
    container_name: superset_worker
    command: ["/app/docker/docker-bootstrap.sh", "worker"]
    env_file: docker/.env
    restart: unless-stopped
    depends_on: *superset-depends-on
    user: *superset-user
    volumes: *superset-volumes
    # Bump memory limit if processing selenium / thumbnails on superset-worker
    # mem_limit: 2038m
    # mem_reservation: 128M
  superset-worker-beat:
    image: *superset-image
    container_name: superset_worker_beat
    command: ["/app/docker/docker-bootstrap.sh", "beat"]
    env_file: docker/.env
    restart: unless-stopped
    depends_on: *superset-depends-on
    user: *superset-user
    volumes: *superset-volumes
  superset-tests-worker:
    image: *superset-image
    container_name: superset_tests_worker
    command: ["/app/docker/docker-bootstrap.sh", "worker"]
    env_file: docker/.env
    environment:
      DATABASE_HOST: localhost
      DATABASE_DB: test
      REDIS_CELERY_DB: 2
      REDIS_RESULTS_DB: 3
      REDIS_HOST: localhost
    network_mode: host
    depends_on: *superset-depends-on
    user: *superset-user
    volumes: *superset-volumes
volumes:
  superset_home:
    external: false
  db_home:
    external: false
  redis:
    external: false

and here is the docker-compose-non-dev.yml file:

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
x-superset-image: &superset-image apache/superset:${TAG:-latest-dev}
x-superset-depends-on: &superset-depends-on
  - db
  - redis
x-superset-volumes: &superset-volumes
  # /app/pythonpath_docker will be appended to the PYTHONPATH in the final container
  - ./docker:/app/docker
  - superset_home:/app/superset_home
version: "3.7"
services:
  redis:
    image: redis:latest
    container_name: superset_cache
    restart: unless-stopped
    volumes:
      - redis:/data
  db:
    env_file: docker/.env-non-dev
    image: postgres:14
    container_name: superset_db
    restart: unless-stopped
    volumes:
      - db_home:/var/lib/postgresql/data
  superset:
    env_file: docker/.env-non-dev
    image: *superset-image
    container_name: superset_app
    command: ["/app/docker/docker-bootstrap.sh", "app-gunicorn"]
    user: "root"
    restart: unless-stopped
    ports:
      - 8088:8088
    depends_on: *superset-depends-on
    volumes: *superset-volumes
  superset-init:
    image: *superset-image
    container_name: superset_init
    command: ["/app/docker/docker-init.sh"]
    env_file: docker/.env-non-dev
    depends_on: *superset-depends-on
    user: "root"
    volumes: *superset-volumes
  superset-worker:
    image: *superset-image
    container_name: superset_worker
    command: ["/app/docker/docker-bootstrap.sh", "worker"]
    env_file: docker/.env-non-dev
    restart: unless-stopped
    depends_on: *superset-depends-on
    user: "root"
    volumes: *superset-volumes
  superset-worker-beat:
    image: *superset-image
    container_name: superset_worker_beat
    command: ["/app/docker/docker-bootstrap.sh", "beat"]
    env_file: docker/.env-non-dev
    restart: unless-stopped
    depends_on: *superset-depends-on
    user: "root"
    volumes: *superset-volumes
volumes:
  superset_home:
    external: false
  db_home:
    external: false
  redis:
    external: false

Is there any alternative way to make it work in the local env?
How to use docker-compose links to access username and password details from inside another container
I have a docker-compose file which contains details of my container as well as rabbitmq. Here is a cut-down version of my docker-compose.yml file, where I am using the container_name and links keywords to access the IP address of rabbitmq from inside my container.

version: "3.2"
environment: &my-env
  My_TEST_VAR1: 'test_1'
  My_TEST_VAR2: 'test_2'
rabbitmq:
  container_name: rabbitmq
  image: 'rabbitmq:3.6-management-alpine'
  ports:
    - '5672:5672'
    - '15672:15672'
  environment:
    AMQP_URL: 'amqp://rabbitmq?connection_attempts=5&retry_delay=5'
    RABBITMQ_DEFAULT_USER: "guest"
    RABBITMQ_DEFAULT_PASS: "guest"
my-service:
  tty: true
  image: my_image_name:latest
  working_dir: /opt/services/my_service/
  command: python3.8 my_script.py
  ports:
    - 9000:9000
  links:
    - rabbitmq:rabbitmq.server
  environment:
    <<: *my-env

From inside my container I can ping the rabbitmq server successfully via:

ping rabbitmq.server

Is there a way I can access the rabbitmq default username and password using this link? Or do I have to just pass them as separate environment variables? (I would like to avoid this duplication if possible.)
You should pass them using environment variables. Docker links at this point are an obsolete feature, and I'd recommend just outright deleting any links: you have left in your docker-compose.yml file. Compose sets up networking for you so that the Compose service names rabbitmq and my-service can be used as host names between the containers without any special setup; the environment variables that links provided were confusing and could unintentionally leak data.

If you want to avoid repeating things, you can use YAML anchor syntax as you already have, or write the environment variables into a separate env_file:. Unless you have a lot of settings or a lot of containers, just writing them in the docker-compose.yml file is easiest.

version: '3.8'
services:
  rabbitmq:
    image: 'rabbitmq:3.6-management-alpine'
    ports:
      - '5672:5672'
      - '15672:15672'
    environment:
      RABBITMQ_DEFAULT_USER: "guest"
      RABBITMQ_DEFAULT_PASS: "guest"
    # You may want volumes: to persist the queue.
    # As a special case for RabbitMQ only, you would need a hostname:.
  my-service:
    image: my_image_name:latest
    ports:
      - 9000:9000
    environment:
      # I'd just write these out.
      My_TEST_VAR1: 'test_1'
      My_TEST_VAR2: 'test_2'
      RABBITMQ_HOST: rabbitmq
      RABBITMQ_USER: guest
      RABBITMQ_PASSWORD: guest
    # working_dir: and command: should be in your Dockerfile as
    # WORKDIR and CMD respectively. links: is obsolete.

In principle you can attach an anchor to any YAML node, though I'd find the syntax a little bit confusing if I was reading it. I'd tend to avoid syntax like this, but it is technically an option.

services:
  rabbitmq:
    environment:
      RABBITMQ_DEFAULT_USER: &rabbitmq_user guest
  my-app:
    environment:
      RABBITMQ_USER: *rabbitmq_user

Finally, I hinted initially that the obsolete Docker links feature does republish environment variables. I wouldn't take advantage of this (it's an information leak in many ways, there are potential conflicts with the application's own environment variables, and the links feature in general is considered obsolete), but it is theoretically possible to use it:

services:
  rabbitmq:
    environment:
      RABBITMQ_DEFAULT_USER: guest
  my-app:
    links: [rabbitmq]

docker-compose run my-app sh -c 'echo $RABBITMQ_DEFAULT_USER'

It'd be up to your application setup to understand the RabbitMQ image setup variables.
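On the application side, the recommended approach amounts to reading those variables at startup. A minimal sketch, assuming the hypothetical RABBITMQ_* variable names from the compose example above (the defaults mirror the RabbitMQ image's guest/guest credentials):

```python
import os

# Read connection settings injected by docker-compose; the variable
# names here are assumptions matching the compose example above.
rabbitmq_host = os.environ.get("RABBITMQ_HOST", "rabbitmq")
rabbitmq_user = os.environ.get("RABBITMQ_USER", "guest")
rabbitmq_password = os.environ.get("RABBITMQ_PASSWORD", "guest")

# Assemble an AMQP URL the client library can consume.
amqp_url = f"amqp://{rabbitmq_user}:{rabbitmq_password}@{rabbitmq_host}:5672/"
print(amqp_url)
```

Because the values come from the environment, the same image runs unchanged in Compose, in CI, or locally; only the injected variables differ.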
Docker Compose Error: services Additional property * is not allowed
Question: Is the following output an error?
Target: I want to run frontend, backend and a database container through Docker. I want to hot reload my docker-compose builds on code changes.
Context: If I run this on PowerShell: docker-compose build; docker-compose up -d, I ran into this:

services Additional property mongodb is not allowed
services Additional property mongodb is not allowed

docker-compose.yml:

version: '3.8'
services:
  api:
    build: ./api
    container_name: api
    ports:
      - 4080:4080
    networks:
      - network-backend
      - network-frontend
    depends_on:
      - 'mongodb'
    volumes:
      - .:/code
  mongodb:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
    ports:
      - 27017:27017
    networks:
      - network-backend
    volumes:
      - db-data:/mongo-data
volumes:
  db-data:
networks:
  network-backend:
  network-frontend:

I thought this was related to this issue.
OK, found the answer. There were weird characters in the config file. VS Code and Notebook didn't show me the characters. After testing a couple of online YAML validators, I detected the issue. (YouTube video of the error)
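Such "weird characters" are typically invisible whitespace, e.g. a non-breaking space used as indentation, which makes the YAML parser treat the mis-indented key as an unknown top-level property. One way to hunt for them is a quick script; this is a sketch for illustration, not part of the original answer:

```python
def find_suspicious_chars(text: str):
    """Report (line, column, codepoint) for characters that can silently
    break YAML indentation: anything non-ASCII, plus tab characters."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if ord(ch) > 126 or ch == "\t":
                hits.append((lineno, col, f"U+{ord(ch):04X}"))
    return hits

# A non-breaking space (U+00A0) before 'mongodb:' looks like a normal
# space in most editors, but YAML treats the key as misindented.
bad_yaml = "services:\n  api:\n    image: node\n\u00a0 mongodb:\n    image: mongo\n"
print(find_suspicious_chars(bad_yaml))  # [(4, 1, 'U+00A0')]
```

Running this over the failing docker-compose.yml would pinpoint exactly which line and column carry the hidden character.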
Druid Docker different than Local Druid
Goal: I am testing out different distributions of Apache Druid. My goal is to eventually use Druid in a docker-compose setup.
Expected results: Druid 0.21.1 installed and run locally (with start-micro-quickstart) is the same as Druid 0.21.1 run with docker-compose.
Actual results: Druid installed and run locally has a data ingestion page that looks like this: (screenshot)
The docker-compose.yml from the Druid repository does not run as-is because it references a druid:0.22.0 image when 0.21.1 is the latest available. When the references to 0.22.0 are changed to 0.21.0, docker-compose succeeds, but the data ingestion page looks like this: (screenshot)
Note specifically the lack of native support for Apache Kafka data ingestion, which is what I am using.
Question: Why are the local and Docker builds different? Is there any way to get the start-micro-quickstart version in Docker?

docker-compose.yml:

#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
version: "2.2"
volumes:
  metadata_data: {}
  middle_var: {}
  historical_var: {}
  broker_var: {}
  coordinator_var: {}
  router_var: {}
  druid_shared: {}
services:
  postgres:
    container_name: postgres
    image: postgres:latest
    volumes:
      - metadata_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=FoolishPassword
      - POSTGRES_USER=druid
      - POSTGRES_DB=druid
  # Need 3.5 or later for container nodes
  zookeeper:
    container_name: zookeeper
    image: zookeeper:3.5
    ports:
      - "2181:2181"
    environment:
      - ZOO_MY_ID=1
  coordinator:
    image: apache/druid:0.21.1
    container_name: coordinator
    volumes:
      - druid_shared:/opt/shared
      - coordinator_var:/opt/druid/var
    depends_on:
      - zookeeper
      - postgres
    ports:
      - "8081:8081"
    command:
      - coordinator
    env_file:
      - environment
  broker:
    image: apache/druid:0.21.1
    container_name: broker
    volumes:
      - broker_var:/opt/druid/var
    depends_on:
      - zookeeper
      - postgres
      - coordinator
    ports:
      - "8082:8082"
    command:
      - broker
    env_file:
      - environment
  historical:
    image: apache/druid:0.21.1
    container_name: historical
    volumes:
      - druid_shared:/opt/shared
      - historical_var:/opt/druid/var
    depends_on:
      - zookeeper
      - postgres
      - coordinator
    ports:
      - "8083:8083"
    command:
      - historical
    env_file:
      - environment
  middlemanager:
    image: apache/druid:0.21.1
    container_name: middlemanager
    volumes:
      - druid_shared:/opt/shared
      - middle_var:/opt/druid/var
    depends_on:
      - zookeeper
      - postgres
      - coordinator
    ports:
      - "8091:8091"
      - "8100-8105:8100-8105"
    command:
      - middleManager
    env_file:
      - environment
  router:
    image: apache/druid:0.21.1
    container_name: router
    volumes:
      - router_var:/opt/druid/var
    depends_on:
      - zookeeper
      - postgres
      - coordinator
    ports:
      - "8888:8888"
    command:
      - router
    env_file:
      - environment
Edit: After restarting the Docker container, it now matches the local Druid build. I have no idea why I got the other version initially, but if you have this problem, try restarting the container. If anyone has a real answer, please add it, but I don't want to leave this open.
Multiple Rails Application docker up not working
I have two Rails 6 applications and I am trying to deploy them to an AWS EC2 instance on different ports, 8080 and 8081. When I run docker-compose up -d, it starts one Rails application successfully, but if I then run docker-compose up -d for the second application, it takes the first application down and brings the other application up on its port. Below is my Docker configuration for the two applications.

Application 1:

version: "3.4"
services:
  app:
    image: "dockerhub_repo/a_api:${TAG}"
    # build:
    #   context: .
    #   dockerfile: Dockerfile
    container_name: a_api_container
    depends_on:
      - database
      - redis
      - sidekiq
    ports:
      - "8080:8080"
    volumes:
      - .:/app
    env_file: .env
    environment:
      RAILS_ENV: staging
  database:
    image: postgres:12.1
    container_name: a_database_container
    restart: always
    volumes:
      - db_data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
  sidekiq:
    image: "dockerhub_repo/a_api:${STAG}"
    container_name: a_sidekiq_container
    environment:
      RAILS_ENV: staging
    env_file: .env
    depends_on:
      - redis
    volumes:
      - ".:/app"
  redis:
    image: redis:4.0-alpine
    container_name: a_redis_container
    volumes:
      - "redis:/data"
volumes:
  redis:
  db_data:

Application 2:

version: "3.4"
services:
  app:
    image: "dockerhub_repo/b_api:${PPTAG}"
    build:
      context: .
      dockerfile: Dockerfile
    container_name: b_api
    depends_on:
      - database
      - redis
    ports:
      - "8081:8081"
    volumes:
      - .:/app
    env_file: .env
    environment:
      RAILS_ENV: development
  database:
    image: postgres:12.1
    container_name: pp_database
    restart: always
    volumes:
      - db_data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
  redis:
    image: redis:4.0-alpine
    container_name: pp_redis
volumes:
  db_data:

This configuration works very well on my local machine. It starts both applications locally on different ports, but it has some issue on AWS EC2. Is there anything wrong in the configuration?
Compose has the notion of a project name. If you add or delete containers from a docker-compose.yml file, it looks for existing containers that are labeled with the project name to figure out what needs to change. The project name is also included in the Docker names of containers, networks, and volumes.

You can configure the project name with the COMPOSE_PROJECT_NAME environment variable or the docker-compose -p option. If you don't configure it, it defaults to the base name of the current directory.

You clarify in a comment that the two docker-compose.yml files are in directories app1/backend and app2/backend. Since the base name of both directories is backend, they have the same project name; so if you run docker-compose up in the app2/backend directory, it finds the existing containers for the backend project, sees they don't match what's in the docker-compose.yml file, and deletes them (even though you as the operator think they belong to the other project).

There are a couple of ways to get around this:

Rename one or the other directory; maybe move the docker-compose.yml files up to the top-level app1 and app2 directories.
In one or both directories, create a .env file that sets COMPOSE_PROJECT_NAME=app1. (Note that file is checked in the current directory, not necessarily the directory that contains the docker-compose.yml file.)
Set and change an environment variable export COMPOSE_PROJECT_NAME=app1.
Consistently use an option docker-compose -p app1 ... with all Compose commands.
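The default-project-name collision described above can be sketched in a few lines; this is a rough approximation of what Compose does (the real implementation also normalizes the name), shown only to make the failure mode concrete:

```python
import os

def default_project_name(compose_dir: str) -> str:
    """Approximate docker-compose's default project name:
    the base name of the directory holding the compose file."""
    return os.path.basename(os.path.normpath(compose_dir))

# Both apps resolve to the same project name, so their containers collide:
print(default_project_name("app1/backend"))  # backend
print(default_project_name("app2/backend"))  # backend
```

Since both directories yield the project name "backend", the second docker-compose up sees the first app's containers as stale members of its own project and replaces them.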