node-red localhost not connecting - docker

I am running Node-RED with docker-compose, and from the .gitlab-ci.yml file I call the docker/compose image. My pipeline works and I can see this:
node-red | 11 Nov 11:28:51 - [info]
node-red |
node-red | Welcome to Node-RED
node-red | ===================
node-red |
node-red | 11 Nov 11:28:51 - [info] Node-RED version: v3.0.2
node-red | 11 Nov 11:28:51 - [info] Node.js version: v16.16.0
node-red | 11 Nov 11:28:51 - [info] Linux 5.15.49-linuxkit x64 LE
node-red | 11 Nov 11:28:52 - [info] Loading palette nodes
node-red | 11 Nov 11:28:53 - [info] Settings file : /data/settings.js
node-red | 11 Nov 11:28:53 - [info] Context store : 'default' [module=memory]
node-red | 11 Nov 11:28:53 - [info] User directory : /data
node-red | 11 Nov 11:28:53 - [warn] Projects disabled : editorTheme.projects.enabled=false
node-red | 11 Nov 11:28:53 - [info] Flows file : /data/flows.json
node-red | 11 Nov 11:28:53 - [warn]
node-red |
node-red | ---------------------------------------------------------------------
node-red | Your flow credentials file is encrypted using a system-generated key.
node-red |
node-red | If the system-generated key is lost for any reason, your credentials
node-red | file will not be recoverable, you will have to delete it and re-enter
node-red | your credentials.
node-red |
node-red | You should set your own key using the 'credentialSecret' option in
node-red | your settings file. Node-RED will then re-encrypt your credentials
node-red | file using your chosen key the next time you deploy a change.
node-red | ---------------------------------------------------------------------
node-red |
node-red | 11 Nov 11:28:53 - [info] Server now running at http://127.0.0.1:1880/
node-red | 11 Nov 11:28:53 - [warn] Encrypted credentials not found
node-red | 11 Nov 11:28:53 - [info] Starting flows
node-red | 11 Nov 11:28:53 - [info] Started flows
But when I try to open localhost in a browser to access Node-RED or the dashboard, I get the error "Failed to open page".
This is my docker-compose.yml:
version: "3.7"
services:
  node-red:
    image: nodered/node-red:latest
    user: '1000'
    container_name: node-red
    environment:
      - TZ=Europe/Amsterdam
    ports:
      - "1880:1880"
This is my .gitlab-ci.yml:
yateen-docker:
  stage: build
  image:
    name: docker/compose
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  before_script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
  script:
    - docker-compose up
  only:
    - main
Any help is appreciated!
I created the Node-RED container via docker-compose rather than just a docker run command. The node-red image is running, but I can't access the server page.
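One thing worth checking: the docker-compose up in the .gitlab-ci.yml job starts the container on the GitLab runner, inside the docker:dind service, not on the machine where your browser runs, so http://localhost:1880 on your own computer has nothing listening. As a sanity check, a minimal sketch, assuming you start the stack on your own machine rather than in CI:

# start the stack locally from the directory holding docker-compose.yml
docker-compose up -d
# the editor should answer on the published port
curl -i http://127.0.0.1:1880/

If that works locally, the compose file itself is fine; the gap is that containers started by the CI job are only reachable from inside the runner.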

Related

Docker Image DynamoDB unable to open database file

I am attempting to run the Docker image of the AWS DynamoDB service using a local volume to store data. When I try to create my first table, I get the following error repeating until eventually the script fails.
dynamodb-local | Aug 31, 2022 3:32:49 PM com.almworks.sqlite4java.Internal log
dynamodb-local | WARNING: [sqlite] SQLiteQueue[shared-local-instance.db]: stopped abnormally, reincarnating in 3000ms
dynamodb-local | Aug 31, 2022 3:32:52 PM com.almworks.sqlite4java.Internal log
dynamodb-local | WARNING: [sqlite] cannot open DB[5]: com.almworks.sqlite4java.SQLiteException: [14] unable to open database file
dynamodb-local | Aug 31, 2022 3:32:52 PM com.almworks.sqlite4java.Internal log
dynamodb-local | SEVERE: [sqlite] SQLiteQueue[shared-local-instance.db]: error running job queue
dynamodb-local | com.almworks.sqlite4java.SQLiteException: [14] unable to open database file
dynamodb-local | at com.almworks.sqlite4java.SQLiteConnection.open0(SQLiteConnection.java:1480)
dynamodb-local | at com.almworks.sqlite4java.SQLiteConnection.open(SQLiteConnection.java:282)
dynamodb-local | at com.almworks.sqlite4java.SQLiteConnection.open(SQLiteConnection.java:293)
dynamodb-local | at com.almworks.sqlite4java.SQLiteQueue.openConnection(SQLiteQueue.java:464)
dynamodb-local | at com.almworks.sqlite4java.SQLiteQueue.queueFunction(SQLiteQueue.java:641)
dynamodb-local | at com.almworks.sqlite4java.SQLiteQueue.runQueue(SQLiteQueue.java:623)
dynamodb-local | at com.almworks.sqlite4java.SQLiteQueue.access$000(SQLiteQueue.java:77)
dynamodb-local | at com.almworks.sqlite4java.SQLiteQueue$1.run(SQLiteQueue.java:205)
dynamodb-local | at java.lang.Thread.run(Thread.java:750)
My docker-compose file is based on the one from AWS documentation (https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.DownloadingAndRunning.html)
version: "3.9" # optional since v1.27.0
services:
server:
build:
context: ./server/
dockerfile: Dockerfile.consumer
ports:
- "5558:5558"
depends_on:
- "dynamodb-local"
environment:
AWS_ACCESS_KEY_ID: 'DUMMYIDEXAMPLE'
AWS_SECRET_ACCESS_KEY: 'DUMMYEXAMPLEKEY'
dynamodb-local:
image: "amazon/dynamodb-local:latest"
ports:
- "8000:8000"
volumes:
- "data:/home/dynamodblocal/data"
command: "-jar DynamoDBLocal.jar -sharedDb -dbPath ./data"
container_name: dynamodb-local
working_dir: /home/dynamodblocal
volumes:
logvolume01: {}
data: {}
Below is the table I am creating
import boto3
from pprint import pprint
from botocore.exceptions import ClientError

TABLE_NAME = "check_tbl"

dynamodb = boto3.client(
    'dynamodb',
    region_name="us-east-1",
    aws_access_key_id="dummy_access_key",
    aws_secret_access_key="dummy_secret_key",
    endpoint_url="http://localhost:8000",
)

try:
    response = dynamodb.describe_table(TableName=TABLE_NAME)
except ClientError as ce:
    if ce.response['Error']['Code'] == 'ResourceNotFoundException':
        result = dynamodb.create_table(
            TableName=TABLE_NAME,
            KeySchema=[
                {
                    'AttributeName': 'id',
                    'KeyType': 'HASH'  # Partition key
                }
            ],
            AttributeDefinitions=[
                {
                    # the key attribute itself must be defined here;
                    # valid DynamoDB attribute types are S, N and B
                    'AttributeName': 'id',
                    'AttributeType': 'S'
                }
            ],
            ProvisionedThroughput={
                'ReadCapacityUnits': 10,
                'WriteCapacityUnits': 10
            }
        )
    else:
        print("Unknown exception occurred while querying for the " + TABLE_NAME + " table. Printing full error:")
        pprint(ce.response)
My volumes path (/var/lib/docker/volumes/data) is owned by root, but that doesn't seem likely to be the problem.
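For what it's worth, the amazon/dynamodb-local image runs its process as a non-root user, so a freshly created named volume owned by root would produce exactly this "unable to open database file" error. A minimal sketch of a local-dev workaround; the user: root override is an assumption on my part, not something from the AWS example:

  dynamodb-local:
    image: "amazon/dynamodb-local:latest"
    user: root   # assumption: acceptable for local dev; lets the process write to the root-owned volume
    ports:
      - "8000:8000"
    volumes:
      - "data:/home/dynamodblocal/data"
    command: "-jar DynamoDBLocal.jar -sharedDb -dbPath ./data"
    container_name: dynamodb-local
    working_dir: /home/dynamodblocal

Alternatively, chown the volume's backing directory to the container user instead of running as root.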

Sonarscanner cannot reach sonarqube server using docker-compose

I have just created my docker-compose file, trying to run a SonarQube server alongside Postgres and SonarScanner. The SonarQube server and the database can connect; however, my SonarScanner cannot reach the SonarQube server.
This is my docker-compose file:
version: "3"
services:
sonarqube:
image: sonarqube
build: .
expose:
- 9000
ports:
- "127.0.0.1:9000:9000"
networks:
- sonarnet
environment:
- sonar.jdbc.url=jdbc:postgresql://postgres:5432/sonar
- sonar.jdbc.username=sonar
- sonar.jdbc.password=sonar
volumes:
- sonarqube_conf:/opt/sonarqube/conf
- sonarqube_data:/opt/sonarqube/data
- sonarqube_extensions:/opt/sonarqube/extensions
- sonarqube_bundled-plugins:/opt/sonarqube/lib/bundled-plugins
postgres:
image: postgres
build: .
networks:
- sonarnet
ports:
- "5432:5432"
environment:
- POSTGRES_USER=sonar
- POSTGRES_PASSWORD=sonar
volumes:
- postgresql:/var/lib/postgresql
- postgresql_data:/var/lib/postgresql/data
sonarscanner:
image: newtmitch/sonar-scanner
networks:
- sonarnet
depends_on:
- sonarqube
volumes:
- ./:/usr/src
networks:
sonarnet:
volumes:
sonarqube_conf:
sonarqube_data:
sonarqube_extensions:
sonarqube_bundled-plugins:
postgresql:
postgresql_data:
This is my sonar-project.properties file:
# must be unique in a given SonarQube instance
sonar.projectKey=toh-token
# --- optional properties ---
#defaults to project key
#sonar.projectName=toh
# defaults to 'not provided'
#sonar.projectVersion=1.0
# Path is relative to the sonar-project.properties file. Defaults to .
#sonar.sources=$HOME/.solo/angular/toh
# Encoding of the source code. Default is default system encoding
#sonar.sourceEncoding=UTF-8
My sonar-project.properties is located in the same directory as the docker-compose file.
This is what happens whenever I start the services:
Attaching to sonarqube-postgres-1, sonarqube-sonarqube-1, sonarqube-sonarscanner-1
sonarqube-sonarqube-1 | Dropping Privileges
sonarqube-postgres-1 |
sonarqube-postgres-1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
sonarqube-postgres-1 |
sonarqube-postgres-1 | 2022-06-12 20:59:39.522 UTC [1] LOG: starting PostgreSQL 14.3 (Debian 14.3-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
sonarqube-postgres-1 | 2022-06-12 20:59:39.523 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
sonarqube-postgres-1 | 2022-06-12 20:59:39.523 UTC [1] LOG: listening on IPv6 address "::", port 5432
sonarqube-postgres-1 | 2022-06-12 20:59:39.525 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
sonarqube-postgres-1 | 2022-06-12 20:59:39.533 UTC [26] LOG: database system was shut down at 2022-06-12 20:57:58 UTC
sonarqube-postgres-1 | 2022-06-12 20:59:39.542 UTC [1] LOG: database system is ready to accept connections
sonarqube-sonarscanner-1 | INFO: Scanner configuration file: /usr/lib/sonar-scanner/conf/sonar-scanner.properties
sonarqube-sonarscanner-1 | INFO: Project root configuration file: /usr/src/sonar-project.properties
sonarqube-sonarscanner-1 | INFO: SonarScanner 4.5.0.2216
sonarqube-sonarscanner-1 | INFO: Java 12-ea Oracle Corporation (64-bit)
sonarqube-sonarscanner-1 | INFO: Linux 5.10.117-1-MANJARO amd64
sonarqube-sonarscanner-1 | INFO: User cache: /root/.sonar/cache
sonarqube-sonarqube-1 | 2022.06.12 20:59:40 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
sonarqube-sonarqube-1 | 2022.06.12 20:59:40 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on [HTTP: 127.0.0.1:9001, TCP: 127.0.0.1:41087]
sonarqube-sonarscanner-1 | ERROR: SonarQube server [http://sonarqube:9000] can not be reached
sonarqube-sonarscanner-1 | INFO: ------------------------------------------------------------------------
sonarqube-sonarscanner-1 | INFO: EXECUTION FAILURE
sonarqube-sonarscanner-1 | INFO: ------------------------------------------------------------------------
sonarqube-sonarscanner-1 | INFO: Total time: 0.802s
sonarqube-sonarscanner-1 | INFO: Final Memory: 3M/20M
sonarqube-sonarscanner-1 | INFO: ------------------------------------------------------------------------
sonarqube-sonarscanner-1 | ERROR: Error during SonarScanner execution
sonarqube-sonarscanner-1 | org.sonarsource.scanner.api.internal.ScannerException: Unable to execute SonarScanner analysis
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.lambda$createLauncher$0(IsolatedLauncherFactory.java:85)
sonarqube-sonarscanner-1 | at java.base/java.security.AccessController.doPrivileged(AccessController.java:310)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.createLauncher(IsolatedLauncherFactory.java:74)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.createLauncher(IsolatedLauncherFactory.java:70)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.EmbeddedScanner.doStart(EmbeddedScanner.java:185)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.EmbeddedScanner.start(EmbeddedScanner.java:123)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.cli.Main.execute(Main.java:73)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.cli.Main.main(Main.java:61)
sonarqube-sonarscanner-1 | Caused by: java.lang.IllegalStateException: Fail to get bootstrap index from server
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.BootstrapIndexDownloader.getIndex(BootstrapIndexDownloader.java:42)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.JarDownloader.getScannerEngineFiles(JarDownloader.java:58)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.JarDownloader.download(JarDownloader.java:53)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.lambda$createLauncher$0(IsolatedLauncherFactory.java:76)
sonarqube-sonarscanner-1 | ... 7 more
sonarqube-sonarscanner-1 | Caused by: java.net.ConnectException: Failed to connect to sonarqube/172.30.0.2:9000
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.RealConnection.connectSocket(RealConnection.java:265)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.RealConnection.connect(RealConnection.java:183)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.ExchangeFinder.findConnection(ExchangeFinder.java:224)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.ExchangeFinder.findHealthyConnection(ExchangeFinder.java:108)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.ExchangeFinder.find(ExchangeFinder.java:88)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.Transmitter.newExchange(Transmitter.java:169)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:41)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:94)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:88)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.RealCall.getResponseWithInterceptorChain(RealCall.java:221)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.RealCall.execute(RealCall.java:81)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.ServerConnection.callUrl(ServerConnection.java:114)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.ServerConnection.downloadString(ServerConnection.java:99)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.BootstrapIndexDownloader.getIndex(BootstrapIndexDownloader.java:39)
sonarqube-sonarscanner-1 | ... 10 more
sonarqube-sonarscanner-1 | Caused by: java.net.ConnectException: Connection refused (Connection refused)
sonarqube-sonarscanner-1 | at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
sonarqube-sonarscanner-1 | at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
sonarqube-sonarscanner-1 | at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
sonarqube-sonarscanner-1 | at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
sonarqube-sonarscanner-1 | at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403)
sonarqube-sonarscanner-1 | at java.base/java.net.Socket.connect(Socket.java:591)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.platform.Platform.connectSocket(Platform.java:130)
sonarqube-sonarscanner-1 | at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.RealConnection.connectSocket(RealConnection.java:263)
sonarqube-sonarscanner-1 | ... 31 more
sonarqube-sonarscanner-1 | ERROR:
sonarqube-sonarscanner-1 | ERROR: Re-run SonarScanner using the -X switch to enable full debug logging.
Is there something I am doing wrong?
As @Hans Killian said, the issue was the scanner trying to connect to the server before the server was up and running. I fixed it by adding the following to the scanner's service:
command: ["sh", "-c", "sleep 60 && sonar-scanner -Dsonar.projectBaseDir=/usr/src"]. This suspends the scanner until the server is up and running.
I then added the following credentials to the sonar-project.properties file:
sonar.login=admin
sonar.password=admin
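A fixed sleep is fragile: if the server takes longer than 60 seconds to start, the scanner still fails. A sketch of a readiness loop instead, assuming wget is present in the scanner image and using SonarQube's /api/system/status endpoint, which reports "UP" once the server is ready:

  sonarscanner:
    image: newtmitch/sonar-scanner
    networks:
      - sonarnet
    depends_on:
      - sonarqube
    volumes:
      - ./:/usr/src
    command: >
      sh -c "until wget -qO- http://sonarqube:9000/api/system/status | grep -q UP;
      do echo waiting for sonarqube; sleep 5; done;
      sonar-scanner -Dsonar.projectBaseDir=/usr/src"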

Rabbitmq on docker: Application mnesia exited with reason: stopped

I'm trying to launch Rabbitmq with docker-compose alongside DRF and Celery.
Here's my docker-compose file. Everything else works fine, except for rabbitmq:
version: '3.7'
services:
  drf:
    build: ./drf
    entrypoint: ["/bin/sh","-c"]
    command:
      - |
        python manage.py migrate
        python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./drf/:/usr/src/drf/
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
    depends_on:
      - db
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=base_test
  redis:
    image: redis:alpine
    volumes:
      - redis:/data
    ports:
      - "6379:6379"
    depends_on:
      - drf
  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: 'rabbitmq'
    ports:
      - 5672:5672
      - 15672:15672
    volumes:
      - ~/.docker-conf/rabbitmq/data/:/var/lib/rabbitmq/
      - ~/.docker-conf/rabbitmq/log/:/var/log/rabbitmq
    networks:
      - net_1
  celery_worker:
    command: sh -c "wait-for redis:3000 && wait-for drf:8000 -- celery -A base-test worker -l info"
    depends_on:
      - drf
      - db
      - redis
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: '0.50'
          memory: 50M
        reservations:
          cpus: '0.25'
          memory: 20M
    hostname: celery_worker
    image: app-image
    networks:
      - net_1
    restart: on-failure
  celery_beat:
    command: sh -c "wait-for redis:3000 && wait-for drf:8000 -- celery -A mysite beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler"
    depends_on:
      - drf
      - db
      - redis
    hostname: celery_beat
    image: app-image
    networks:
      - net_1
    restart: on-failure
networks:
  net_1:
    driver: bridge
volumes:
  postgres_data:
  redis:
And here's what happens when I launch it. Can someone please help me find the problem? I can't even follow the instructions and read the generated dump file, because the rabbitmq container exits after the error.
rabbitmq | Starting broker...2021-04-05 16:49:58.330 [info] <0.273.0>
rabbitmq | node : rabbit@0e652f57b1b3
rabbitmq | home dir : /var/lib/rabbitmq
rabbitmq | config file(s) : /etc/rabbitmq/rabbitmq.conf
rabbitmq | cookie hash : ZPam/SOKy2dEd/3yt0OlaA==
rabbitmq | log(s) : <stdout>
rabbitmq | database dir : /var/lib/rabbitmq/mnesia/rabbit@0e652f57b1b3
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: list of feature flags found:
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: [x] drop_unroutable_metric
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: [x] empty_basic_get_metric
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: [x] implicit_default_bindings
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: [x] maintenance_mode_status
rabbitmq | 2021-04-05 16:50:09.542 [info] <0.273.0> Feature flags: [ ] quorum_queue
rabbitmq | 2021-04-05 16:50:09.543 [info] <0.273.0> Feature flags: [ ] user_limits
rabbitmq | 2021-04-05 16:50:09.545 [info] <0.273.0> Feature flags: [ ] virtual_host_metadata
rabbitmq | 2021-04-05 16:50:09.546 [info] <0.273.0> Feature flags: feature flag states written to disk: yes
rabbitmq | 2021-04-05 16:50:10.844 [info] <0.273.0> Running boot step pre_boot defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.845 [info] <0.273.0> Running boot step rabbit_core_metrics defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.846 [info] <0.273.0> Running boot step rabbit_alarm defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.854 [info] <0.414.0> Memory high watermark set to 2509 MiB (2631391641 bytes) of 6273 MiB (6578479104 bytes) total
rabbitmq | 2021-04-05 16:50:10.864 [info] <0.416.0> Enabling free disk space monitoring
rabbitmq | 2021-04-05 16:50:10.864 [info] <0.416.0> Disk free limit set to 50MB
rabbitmq | 2021-04-05 16:50:10.872 [info] <0.273.0> Running boot step code_server_cache defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.872 [info] <0.273.0> Running boot step file_handle_cache defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.872 [info] <0.419.0> Limiting to approx 1048479 file handles (943629 sockets)
rabbitmq | 2021-04-05 16:50:10.873 [info] <0.420.0> FHC read buffering: OFF
rabbitmq | 2021-04-05 16:50:10.873 [info] <0.420.0> FHC write buffering: ON
rabbitmq | 2021-04-05 16:50:10.874 [info] <0.273.0> Running boot step worker_pool defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.874 [info] <0.372.0> Will use 4 processes for default worker pool
rabbitmq | 2021-04-05 16:50:10.874 [info] <0.372.0> Starting worker pool 'worker_pool' with 4 processes in it
rabbitmq | 2021-04-05 16:50:10.876 [info] <0.273.0> Running boot step database defined by app rabbit
rabbitmq | 2021-04-05 16:50:10.899 [info] <0.273.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
rabbitmq | 2021-04-05 16:50:10.900 [info] <0.273.0> Successfully synced tables from a peer
rabbitmq | 2021-04-05 16:50:10.908 [info] <0.44.0> Application mnesia exited with reason: stopped
rabbitmq |
rabbitmq | 2021-04-05 16:50:10.908 [info] <0.44.0> Application mnesia exited with reason: stopped
rabbitmq | 2021-04-05 16:50:10.908 [error] <0.273.0>
rabbitmq | 2021-04-05 16:50:10.908 [error] <0.273.0> BOOT FAILED
rabbitmq | BOOT FAILED
rabbitmq | ===========
rabbitmq | Error during startup: {error,
rabbitmq | 2021-04-05 16:50:10.909 [error] <0.273.0> ===========
rabbitmq | 2021-04-05 16:50:10.909 [error] <0.273.0> Error during startup: {error,
rabbitmq | 2021-04-05 16:50:10.909 [error] <0.273.0> {schema_integrity_check_failed,
rabbitmq | {schema_integrity_check_failed,
rabbitmq | [{table_attributes_mismatch,rabbit_queue,
rabbitmq | 2021-04-05 16:50:10.910 [error] <0.273.0> [{table_attributes_mismatch,rabbit_queue,
rabbitmq | 2021-04-05 16:50:10.910 [error] <0.273.0> [name,durable,auto_delete,exclusive_owner,
rabbitmq | 2021-04-05 16:50:10.911 [error] <0.273.0> arguments,pid,slave_pids,sync_slave_pids,
rabbitmq | 2021-04-05 16:50:10.911 [error] <0.273.0> recoverable_slaves,policy,operator_policy,
rabbitmq | [name,durable,auto_delete,exclusive_owner,
rabbitmq | arguments,pid,slave_pids,sync_slave_pids,
rabbitmq | 2021-04-05 16:50:10.911 [error] <0.273.0> gm_pids,decorators,state,policy_version,
rabbitmq | 2021-04-05 16:50:10.911 [error] <0.273.0> slave_pids_pending_shutdown,vhost,options],
rabbitmq | 2021-04-05 16:50:10.912 [error] <0.273.0> [name,durable,auto_delete,exclusive_owner,
rabbitmq | 2021-04-05 16:50:10.912 [error] <0.273.0> arguments,pid,slave_pids,sync_slave_pids,
rabbitmq | 2021-04-05 16:50:10.913 [error] <0.273.0> recoverable_slaves,policy,operator_policy,
rabbitmq | 2021-04-05 16:50:10.913 [error] <0.273.0> gm_pids,decorators,state,policy_version,
rabbitmq | 2021-04-05 16:50:10.913 [error] <0.273.0> slave_pids_pending_shutdown,vhost,options,
rabbitmq | recoverable_slaves,policy,operator_policy,
rabbitmq | gm_pids,decorators,state,policy_version,
rabbitmq | slave_pids_pending_shutdown,vhost,options],
rabbitmq | [name,durable,auto_delete,exclusive_owner,
rabbitmq | arguments,pid,slave_pids,sync_slave_pids,
rabbitmq | recoverable_slaves,policy,operator_policy,
rabbitmq | gm_pids,decorators,state,policy_version,
rabbitmq | slave_pids_pending_shutdown,vhost,options,
rabbitmq | type,type_state]}]}}
rabbitmq | 2021-04-05 16:50:10.914 [error] <0.273.0> type,type_state]}]}}
rabbitmq | 2021-04-05 16:50:10.916 [error] <0.273.0>
rabbitmq |
rabbitmq | 2021-04-05 16:50:11.924 [info] <0.272.0> [{initial_call,{application_master,init,['Argument__1','Argument__2','Argument__3','Argument__4']}},{pid,<0.272.0>},{registered_name,[]},{error_info
,{exit,{{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name,durable,auto_delete,exclusive_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_
pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options],[name,durable,auto_delete,exclusive_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_
pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options,type,type_state]}]},{rabbit,start,[normal,[]]}},[{application_master,init,4,[{file,"application_master.erl"},{line,138}]},{proc_l
ib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,226}]}]}},{ancestors,[<0.271.0>]},{message_queue_len,1},{messages,[{'EXIT',<0.273.0>,normal}]},{links,[<0.271.0>,<0.44.0>]},{dictionary,[]},{trap_exit,true},{
status,running},{heap_size,610},{stack_size,28},{reductions,534}], []
rabbitmq | 2021-04-05 16:50:11.924 [error] <0.272.0> CRASH REPORT Process <0.272.0> with 0 neighbours exited with reason: {{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name
,durable,auto_delete,exclusive_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options],[name
,durable,auto_delete,exclusive_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options,type,t
ype_state]}]},...} in application_master:init/4 line 138
rabbitmq | 2021-04-05 16:50:11.924 [info] <0.44.0> Application rabbit exited with reason: {{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name,durable,auto_delete,exclusive_o
wner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options],[name,durable,auto_delete,exclusive_o
wner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options,type,type_state]}]},...}
rabbitmq | 2021-04-05 16:50:11.925 [info] <0.44.0> Application rabbit exited with reason: {{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name,durable,auto_delete,exclusive_o
wner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options],[name,durable,auto_delete,exclusive_o
wner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options,type,type_state]}]},...}
rabbitmq | {"Kernel pid terminated",application_controller,"{application_start_failure,rabbit,{{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name,durable,auto_delete,exclusi
ve_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options],[name,durable,auto_delete,exclusi
ve_owner,arguments,pid,slave_pids,sync_slave_pids,recoverable_slaves,policy,operator_policy,gm_pids,decorators,state,policy_version,slave_pids_pending_shutdown,vhost,options,type,type_state]}]},{rabbit,start,
[normal,[]]}}}"}
rabbitmq | Kernel pid terminated (application_controller) ({application_start_failure,rabbit,{{schema_integrity_check_failed,[{table_attributes_mismatch,rabbit_queue,[name,durable,auto_delete,exclusiv
e_owner,arg
rabbitmq |
rabbitmq | Crash dump is being written to: /var/log/rabbitmq/erl_crash.dump...done
rabbitmq exited with code 0
I've managed to make it work by removing container_name and volumes from the rabbitmq section of the docker-compose file. Still, it would be nice to have an explanation of this behavior.
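The log explains it: schema_integrity_check_failed / table_attributes_mismatch on rabbit_queue, where one attribute list ends at vhost, options and the other continues with type, type_state. The bind-mounted ~/.docker-conf/rabbitmq/data/ still held a Mnesia schema written by a different RabbitMQ release than the one the floating rabbitmq:3-management-alpine tag pulled, and the broker refuses to boot on a schema it does not recognize. Removing volumes worked because it discarded the stale schema. A sketch of the equivalent explicit cleanup, assuming the dev queue data is disposable:

# stop the stack, drop the stale Mnesia data, start fresh
docker-compose down
rm -rf ~/.docker-conf/rabbitmq/data/*
docker-compose up -d rabbitmq

Pinning an exact image tag (for example rabbitmq:3.8-management-alpine) keeps the broker version from changing underneath a persistent data directory.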

Setting up Docker for a Laravel app I got errors in "compose/cli/main.py"

On my Kubuntu 18.04 I am trying to run Docker for my Laravel application:
$ docker --version
Docker version 17.12.1-ce, build 7390fc6
I have 3 files:
.env:
# PATHS
DB_PATH_HOST=./databases
APP_PATH_HOST=./votes
APP_PTH_CONTAINER=/var/www/html/
docker-compose.yml:
version: '3'
services:
  web:
    build: ./web/Dockerfile.yml
    environment:
      - APACHE_RUN_USER=www-data
    volumes:
      - ${DB_PATH_HOST}:${APP_PTH_CONTAINER}
    ports:
      - 8080:80
    working_dir: ${APP_PTH_CONTAINER}
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 1
    volumes:
      - ${DB_PATH_HOST}:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
  composer:
    image: composer:1.6
    volumes:
      - ${DB_PATH_HOST}:${APP_PTH_CONTAINER}
    working_dir: ${APP_PTH_CONTAINER}
    command: composer install
/web/Dockerfile.yml:
FROM php:7.2-apache
RUN docker-php-ext-install \
pdo_mysql \
&& a2enmod \
rewrite
When I try to use docker-compose up --build, I get the following:
serge@serge:/mnt/_work_sdb8/wwwroot/lar/DockerApps/votes_docker$ docker-compose up --build
Building web
Traceback (most recent call last):
File "bin/docker-compose", line 6, in <module>
File "compose/cli/main.py", line 71, in main
File "compose/cli/main.py", line 127, in perform_command
File "compose/cli/main.py", line 1052, in up
File "compose/cli/main.py", line 1048, in up
File "compose/project.py", line 466, in up
File "compose/service.py", line 329, in ensure_image_exists
File "compose/service.py", line 1047, in build
File "site-packages/docker/api/build.py", line 142, in build
TypeError: You must specify a directory to build in path
[6769] Failed to execute script docker-compose
I know that *.py files are Python files, but I do not use Python or work with it; I work with PHP.
Why is there an error, and how do I fix it?
MODIFIED:
$ docker-compose up --build
Building webapp
Step 1/2 : FROM php:7.2-apache
---> a7d68dad7584
Step 2/2 : RUN docker-php-ext-install pdo_mysql && a2enmod rewrite
---> Using cache
---> 519d1b33af81
Successfully built 519d1b33af81
Successfully tagged votes_docker_webapp:latest
Starting votes_docker_adminer_1 ...
Starting votes_docker_composer_1 ...
Starting votes_docker_adminer_1 ... error
votes_docker_db_1 is up-to-date
ERROR: for votes_docker_adminer_1 Cannot start service adminer: driver failed programming external connectivity on endpoint votes_docker_adminer_1 (6e94693ab8b1a990aaa83164df0952e8665f351618a72aStarting votes_docker_composer_1 ... done
ERROR: for adminer Cannot start service adminer: driver failed programming external connectivity on endpoint votes_docker_adminer_1 (6e94693ab8b1a990aaa83164df0952e8665f351618a72a5531f9c3ccc18a2e3d): Bind for 0.0.0.0:8080 failed: port is already allocated
ERROR: Encountered errors while bringing up the project.
I tried to check related ports and got:
# sudo netstat -ntpl | grep 8080:8080
# sudo netstat -ntpl | grep 0.0.0.0:8080
# sudo netstat -ntpl | grep 8080
tcp6 0 0 :::8080 :::* LISTEN 7361/docker-proxy
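That tcp6 line is docker-proxy itself, meaning another container is already publishing host port 8080. A quick way to see which one, assuming your Docker version supports the publish filter:

docker ps --filter "publish=8080"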
MODIFIED #2:
serge@serge:/mnt/_work_sdb8/wwwroot/lar/DockerApps/votes_docker$ docker-compose up --build
Creating network "votes_docker_default" with the default driver
Building webapp
Step 1/2 : FROM php:7.2-apache
---> a7d68dad7584
Step 2/2 : RUN docker-php-ext-install pdo_mysql && a2enmod rewrite
---> Using cache
---> 519d1b33af81
Successfully built 519d1b33af81
Successfully tagged votes_docker_webapp:latest
Creating votes_docker_adminer_1 ... done
Creating votes_docker_composer_1 ... done
Creating votes_docker_webapp_1 ... done
Creating votes_docker_db_1 ... done
Attaching to votes_docker_adminer_1, votes_docker_composer_1, votes_docker_webapp_1, votes_docker_db_1
adminer_1 | PHP 7.2.10 Development Server started at Mon Oct 15 10:14:02 2018
composer_1 | Composer could not find a composer.json file in /var/www/html
composer_1 | To initialize a project, please create a composer.json file as described in the https://getcomposer.org/ "Getting Started" section
votes_docker_composer_1 exited with code 1
webapp_1 | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.20.0.4. Set the 'ServerName' directive globally to suppress this message
webapp_1 | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.20.0.4. Set the 'ServerName' directive globally to suppress this message
webapp_1 | [Mon Oct 15 10:14:05.281793 2018] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.25 (Debian) PHP/7.2.10 configured -- resuming normal operations
webapp_1 | [Mon Oct 15 10:14:05.281843 2018] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
db_1 | 2018-10-15T10:14:06.541323Z 0 [Warning] [MY-011070] [Server] 'Disabling symbolic links using --skip-symbolic-links (or equivalent) is the default. Consider not using this option as it' is deprecated and will be removed in a future release.
db_1 | 2018-10-15T10:14:06.541484Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.11) starting as process 1
db_1 | mbind: Operation not permitted
db_1 | mbind: Operation not permitted
db_1 | mbind: Operation not permitted
db_1 | mbind: Operation not permitted
db_1 | 2018-10-15T10:14:07.062202Z 0 [Warning] [MY-011071] [Server] World-writable config file './auto.cnf' is ignored.
db_1 | 2018-10-15T10:14:07.062581Z 0 [Warning] [MY-010107] [Server] World-writable config file './auto.cnf' has been removed.
db_1 | 2018-10-15T10:14:07.063146Z 0 [Warning] [MY-010075] [Server] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: 0cd8212e-d063-11e8-8e69-0242ac140005.
db_1 | 2018-10-15T10:14:07.079020Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
db_1 | 2018-10-15T10:14:07.091951Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
db_1 | 2018-10-15T10:14:07.103829Z 0 [Warning] [MY-010315] [Server] 'user' entry 'mysql.infoschema@localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-10-15T10:14:07.103896Z 0 [Warning] [MY-010315] [Server] 'user' entry 'mysql.session@localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-10-15T10:14:07.103925Z 0 [Warning] [MY-010315] [Server] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-10-15T10:14:07.103947Z 0 [Warning] [MY-010315] [Server] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-10-15T10:14:07.104006Z 0 [Warning] [MY-010323] [Server] 'db' entry 'performance_schema mysql.session@localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-10-15T10:14:07.104034Z 0 [Warning] [MY-010323] [Server] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-10-15T10:14:07.104070Z 0 [Warning] [MY-010311] [Server] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-10-15T10:14:07.112700Z 0 [Warning] [MY-010330] [Server] 'tables_priv' entry 'user mysql.session@localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-10-15T10:14:07.112738Z 0 [Warning] [MY-010330] [Server] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
db_1 | 2018-10-15T10:14:07.117764Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.11' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.
Is MySQL misconfigured?
I am not running it as root.
Also, since I use LAMP, do I need to stop Apache and MySQL before running the docker-compose command?
MODIFIED #3:
After some searching I pinned the MySQL version in my config file and added a command option:
image: mysql:5.7.23
command: --default-authentication-plugin=mysql_native_password --disable-partition-engine-check
and the above error was fixed.
So:
1. In another console, as root, I ran these commands (but I'm still not sure whether I need them):
sudo service apache2 stop
sudo service mysql stop
2. In a non-root console I ran compose with the flag for background mode:
docker-compose up -d
serge@serge:/mnt/_work_sdb8/wwwroot/lar/DockerApps/votes_docker$ docker-compose down
Stopping votes_docker_db_1 ... done
Stopping votes_docker_webapp_1 ... done
Stopping votes_docker_adminer_1 ... done
Removing votes_docker_db_1 ... done
Removing votes_docker_webapp_1 ... done
Removing votes_docker_composer_1 ... done
Removing votes_docker_adminer_1 ... done
Removing network votes_docker_default
docker-compose up -d
Creating network "votes_docker_default" with the default driver
Creating votes_docker_webapp_1 ... done
Creating votes_docker_adminer_1 ... done
Creating votes_docker_db_1 ... done
Creating votes_docker_composer_1 ... done
I have no errors in the output, but I expected to end up with a vendor directory in my project. This is my web/Dockerfile.yml:
FROM php:7.2-apache
RUN docker-php-ext-install \
pdo_mysql \
&& a2enmod \
rewrite
But I do not see this directory...
Was the installation successful or not?
I'm not sure where to go next.
serge@serge:/mnt/_work_sdb8/wwwroot/lar/DockerApps/votes_docker$ docker info
Containers: 33
Running: 2
Paused: 0
Stopped: 31
Images: 19
Server Version: 17.12.1-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9b55aab90508bd389d7654c4baf173a981477d55
runc version: 9f9c96235cc97674e935002fc3d78361b696a69e
init version: v0.13.0 (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.15.0-36-generic
Operating System: Ubuntu 18.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.711GiB
Name: serge
ID: BDNU:HFWX:N6YV:IWYW:HJSU:SZ23:URPB:3FR2:7I3E:IFGK:AOLH:YRE5
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
serge@serge:/mnt/_work_sdb8/wwwroot/lar/DockerApps/votes_docker$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
votes_docker_webapp latest 519d1b33af81 27 hours ago 378MB
adminer latest 0038b45402de 4 weeks ago 81.7MB
composer 1.6 e28b5b53ab28 4 weeks ago 154MB
php 7.2-apache a7d68dad7584 4 weeks ago 378MB
mysql 5.7.23 563a026a1511 5 weeks ago 372MB
mysql 5.7.22 6bb891430fb6 2 months ago 372MB
test2_php latest 05534d47f926 3 months ago 84.7MB
test1_php latest 05534d47f926 3 months ago 84.7MB
<none> <none> 6060fcf4d103 3 months ago 81MB
php fpm-alpine 601d5b3a95d4 3 months ago 80.6MB
php apache d9faf33e6e40 3 months ago 377MB
mysql latest 8d99edb9fd40 3 months ago 445MB
php 7-fpm 854ffd8dc9d8 3 months ago 367MB
php 7.2 e86d9bb526ef 3 months ago 367MB
ukfx/php apache-stretch 5958cb7c2316 4 months ago 648MB
nginx alpine bc7fdec94612 4 months ago 18MB
hello-world latest e38bc07ac18e 6 months ago 1.85kB
composer/composer latest 5afb0951f2a4 2 years ago 636MB
serge@serge:/mnt/_work_sdb8/wwwroot/lar/DockerApps/votes_docker$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f8beea5dceca mysql:5.7.23 "docker-entrypoint.s…" 6 minutes ago Restarting (2) 6 seconds ago votes_docker_db_1
8309b5456dcf adminer "entrypoint.sh docke…" 6 minutes ago Up 6 minutes 0.0.0.0:8081->8080/tcp votes_docker_adminer_1
cc644206931b votes_docker_webapp "docker-php-entrypoi…" 6 minutes ago Up 6 minutes 0.0.0.0:8080->80/tcp votes_docker_webapp_1
How to fix it?
Thanks!
According to the docker-compose documentation, build can be specified as a string containing a path to the build context if the Dockerfile is inside it.
You are using a Dockerfile.yml file, which is not the default name (Dockerfile), so in this case context and dockerfile should be specified as well:
web:
  build:
    context: ./web
    dockerfile: Dockerfile.yml
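After a change like this, docker-compose config is a cheap sanity check; it resolves variables and validates the merged file without starting anything:

docker-compose config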
The final docker-compose.yaml is:
version: '3'
services:
  web:
    build:
      context: ./web
      dockerfile: Dockerfile.yml
    environment:
      - APACHE_RUN_USER=www-data
    volumes:
      - ${DB_PATH_HOST}:${APP_PTH_CONTAINER}
    ports:
      - 8080:80
    working_dir: ${APP_PTH_CONTAINER}
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 1
    volumes:
      - ${DB_PATH_HOST}:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
  composer:
    image: composer:1.6
    volumes:
      - ${DB_PATH_HOST}:${APP_PTH_CONTAINER}
    working_dir: ${APP_PTH_CONTAINER}
    command: composer install
Addition to MODIFIED part:
Both web and adminer are bound to port 8080 on the host system; that's why you have a conflict. To resolve it, bind adminer to another port, for example 8081.
The final docker-compose.yaml is:
version: '3'
services:
  web:
    build:
      context: ./web
      dockerfile: Dockerfile.yml
    environment:
      - APACHE_RUN_USER=www-data
    volumes:
      - ${DB_PATH_HOST}:${APP_PTH_CONTAINER}
    ports:
      - 8080:80
    working_dir: ${APP_PTH_CONTAINER}
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 1
    volumes:
      - ${DB_PATH_HOST}:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8081:8080
  composer:
    image: composer:1.6
    volumes:
      - ${DB_PATH_HOST}:${APP_PTH_CONTAINER}
    working_dir: ${APP_PTH_CONTAINER}
    command: composer install
Addition to MODIFIED3 part:
I see the following error with your composer docker container:
composer_1 | Composer could not find a composer.json file in /var/www/html
composer_1 | To initialize a project, please create a composer.json file as described in the https://getcomposer.org/ "Getting Started" section
Is it because you accidentally put ${DB_PATH_HOST} instead of ${APP_PATH_HOST} in the composer config?
composer:
  image: composer:1.6
  volumes:
    - ${DB_PATH_HOST}:${APP_PTH_CONTAINER}
  working_dir: ${APP_PTH_CONTAINER}
  command: composer install
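If so, a corrected composer service would mount the application source instead; note that the web service at the top of the file mounts ${DB_PATH_HOST} the same way, so it likely needs the same change:

composer:
  image: composer:1.6
  volumes:
    - ${APP_PATH_HOST}:${APP_PTH_CONTAINER}
  working_dir: ${APP_PTH_CONTAINER}
  command: composer install

With ./votes mounted at /var/www/html, composer install can find composer.json and writes the vendor directory back to the host.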

Error while using docker-compose and nodemon

I am using docker-compose and nodemon for my dev environment.
My file system looks like this:
├── code
│   ├── images
│   │   ├── api
│   │   └── db
│   └── topology
│       └── docker-compose.yml
Normally when I run docker-compose up --build, files are copied from my local computer into the containers. Since I am in dev mode,
I don't want to run docker-compose up --build every time; that is why I am using a volume to share a directory between my local computer and the container.
I did some research and this is what I came up with:
API, Dockerfile:
FROM node:8
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN npm install nodemon -g --save
RUN npm install
CMD [ "nodemon", "app.js" ]
DB, Dockerfile:
FROM mongo:3.2-jessie
docker-compose.yml
version: '2'
services:
  api:
    build: ../images/api
    volumes:
      - .:/usr/src/app
    ports:
      - "7000:7000"
    links: ["db"]
  db:
    build: ../images/db
    ports:
      - "27017:27017"
The problem is that when I run docker-compose up --build, I get this error:
---> 327987c38250
Removing intermediate container f7b46029825f
Step 7/7 : CMD nodemon app.js
---> Running in d8430d03bcd2
---> ee5de77d78eb
Removing intermediate container d8430d03bcd2
Successfully built ee5de77d78eb
Recreating topology_db_1
Recreating topology_api_1
Attaching to topology_db_1, topology_api_1
db_1 | 2018-09-22T10:08:41.679+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=5b93871d0f4f
db_1 | 2018-09-22T10:08:41.679+0000 I CONTROL [initandlisten] db version v3.2.21
db_1 | 2018-09-22T10:08:41.679+0000 I CONTROL [initandlisten] git version: 1ab1010737145ba3761318508ff65ba74dfe8155
db_1 | 2018-09-22T10:08:41.679+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1t 3 May 2016
db_1 | 2018-09-22T10:08:41.679+0000 I CONTROL [initandlisten] allocator: tcmalloc
db_1 | 2018-09-22T10:08:41.679+0000 I CONTROL [initandlisten] modules: none
db_1 | 2018-09-22T10:08:41.679+0000 I CONTROL [initandlisten] build environment:
db_1 | 2018-09-22T10:08:41.679+0000 I CONTROL [initandlisten] distmod: debian81
db_1 | 2018-09-22T10:08:41.680+0000 I CONTROL [initandlisten] distarch: x86_64
db_1 | 2018-09-22T10:08:41.680+0000 I CONTROL [initandlisten] target_arch: x86_64
db_1 | 2018-09-22T10:08:41.680+0000 I CONTROL [initandlisten] options: {}
db_1 | 2018-09-22T10:08:41.686+0000 I - [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
db_1 | 2018-09-22T10:08:41.687+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=8G,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),verbose=(recovery_progress),
db_1 | 2018-09-22T10:08:41.905+0000 I STORAGE [initandlisten] WiredTiger [1537610921:904991][1:0x7fdf57debcc0], txn-recover: Main recovery loop: starting at 89/4096
db_1 | 2018-09-22T10:08:41.952+0000 I STORAGE [initandlisten] WiredTiger [1537610921:952261][1:0x7fdf57debcc0], txn-recover: Recovering log 89 through 90
db_1 | 2018-09-22T10:08:41.957+0000 I STORAGE [initandlisten] WiredTiger [1537610921:957000][1:0x7fdf57debcc0], txn-recover: Recovering log 90 through 90
db_1 | 2018-09-22T10:08:42.148+0000 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
db_1 | 2018-09-22T10:08:42.148+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
db_1 | 2018-09-22T10:08:42.148+0000 I NETWORK [initandlisten] waiting for connections on port 27017
api_1 | Usage: nodemon [nodemon options] [script.js] [args]
api_1 |
api_1 | See "nodemon --help" for more.
api_1 |
topology_api_1 exited with code 0
If I comment out:
volumes:
  - .:/usr/src/app
it builds and runs correctly.
Can someone help me find what is wrong with my approach?
Thanks!
Are docker-compose.yml and the api Dockerfile in different directories? The volumes instruction in the compose file overrides the COPY instruction from the image build; they have different contexts. Your compose file lives in topology/, so - .:/usr/src/app mounts the topology directory, which contains no app.js, over the code the image copied in; that is why nodemon prints its usage message and exits.
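Concretely, since the compose file sits in topology/, a sketch of a fix, assuming the API source lives in images/api: mount that directory, and keep the image's installed dependencies with an anonymous volume:

  api:
    build: ../images/api
    volumes:
      - ../images/api:/usr/src/app   # share the actual source directory, not topology/
      - /usr/src/app/node_modules    # assumption: preserve node_modules installed at build time
    ports:
      - "7000:7000"
    links: ["db"]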
