Same as here. My docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=my_app
    ports:
      - '5432:5432'
  chrome:
    image: selenium/standalone-chrome
    hostname: chrome
    privileged: true
    shm_size: 2g
  web:
    build: .
    image: my-app
    ports:
      - "8000:8000"
    depends_on:
      - db
    command: sh -c "python manage.py migrate &&
      python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    environment:
      - DB_USER=postgres
      - DB_PASSWORD=postgres
      - DB_HOST=db
      - DB_NAME=my_app
Everything starts as expected:
% docker compose build && docker compose up
[+] Building 3.4s (11/11) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 189B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 34B 0.0s
=> [internal] load metadata for docker.io/library/python:3.11.1-bullseye 3.1s
=> [auth] library/python:pull token for registry-1.docker.io 0.0s
=> [internal] load build context 0.1s
=> => transferring context: 8.58kB 0.0s
=> [1/5] FROM docker.io/library/python:3.11.1-bullseye@sha256:cc4910af48 0.0s
=> CACHED [2/5] COPY requirements.txt requirements.txt 0.0s
=> CACHED [3/5] RUN pip install -r requirements.txt 0.0s
=> CACHED [4/5] COPY . /app 0.0s
=> CACHED [5/5] WORKDIR /app 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:10e58eeb73e4651e1e1aedb921fdde3b389cadc204787 0.0s
=> => naming to docker.io/library/my-app 0.0s
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
[+] Running 3/3
⠿ Container my_app-db-1 Created 0.0s
⠿ Container my_app-chrome-1 Created 0.0s
⠿ Container my_app-web-1 Recreated 0.2s
Attaching to my_app-chrome-1, my_app-db-1, my_app-web-1
my_app-db-1 |
my_app-db-1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
my_app-db-1 |
my_app-db-1 | 2023-01-08 14:01:28.671 UTC [1] LOG: starting PostgreSQL 15.1 (Debian 15.1-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
my_app-db-1 | 2023-01-08 14:01:28.672 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
my_app-db-1 | 2023-01-08 14:01:28.673 UTC [1] LOG: listening on IPv6 address "::", port 5432
my_app-db-1 | 2023-01-08 14:01:28.688 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
my_app-db-1 | 2023-01-08 14:01:28.699 UTC [28] LOG: database system was shut down at 2023-01-08 13:54:14 UTC
my_app-db-1 | 2023-01-08 14:01:28.720 UTC [1] LOG: database system is ready to accept connections
my_app-chrome-1 | 2023-01-08 14:01:28,791 INFO Included extra file "/etc/supervisor/conf.d/selenium.conf" during parsing
my_app-chrome-1 | 2023-01-08 14:01:28,803 INFO RPC interface 'supervisor' initialized
my_app-chrome-1 | 2023-01-08 14:01:28,803 CRIT Server 'unix_http_server' running without any HTTP authentication checking
my_app-chrome-1 | 2023-01-08 14:01:28,808 INFO supervisord started with pid 8
my_app-chrome-1 | 2023-01-08 14:01:29,811 INFO spawned: 'xvfb' with pid 10
my_app-chrome-1 | 2023-01-08 14:01:29,819 INFO spawned: 'vnc' with pid 11
my_app-chrome-1 | 2023-01-08 14:01:29,833 INFO spawned: 'novnc' with pid 12
my_app-chrome-1 | 2023-01-08 14:01:29,849 INFO spawned: 'selenium-standalone' with pid 14
my_app-chrome-1 | Setting up SE_NODE_GRID_URL...
my_app-chrome-1 | 2023-01-08 14:01:29,911 INFO success: xvfb entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
my_app-chrome-1 | 2023-01-08 14:01:29,913 INFO success: vnc entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
my_app-chrome-1 | 2023-01-08 14:01:29,913 INFO success: novnc entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
my_app-chrome-1 | 2023-01-08 14:01:29,914 INFO success: selenium-standalone entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
my_app-chrome-1 | Selenium Grid Standalone configuration:
my_app-chrome-1 | [network]
my_app-chrome-1 | relax-checks = true
my_app-chrome-1 |
my_app-chrome-1 | [node]
my_app-chrome-1 | session-timeout = "300"
my_app-chrome-1 | override-max-sessions = false
my_app-chrome-1 | detect-drivers = false
my_app-chrome-1 | drain-after-session-count = 0
my_app-chrome-1 | max-sessions = 1
my_app-chrome-1 |
my_app-chrome-1 | [[node.driver-configuration]]
my_app-chrome-1 | display-name = "chrome"
my_app-chrome-1 | stereotype = '{"browserName": "chrome", "browserVersion": "108.0", "platformName": "Linux"}'
my_app-chrome-1 | max-sessions = 1
my_app-chrome-1 |
my_app-chrome-1 | Starting Selenium Grid Standalone...
my_app-chrome-1 | Tracing is disabled
my_app-web-1 | Operations to perform:
my_app-web-1 | Apply all migrations: admin, auth, contenttypes, core, sessions
my_app-web-1 | Running migrations:
my_app-web-1 | No migrations to apply.
my_app-web-1 | Watching for file changes with StatReloader
my_app-web-1 | Performing system checks...
my_app-web-1 |
my_app-web-1 | System check identified no issues (0 silenced).
my_app-web-1 | January 08, 2023 - 14:01:33
my_app-web-1 | Django version 4.1.5, using settings 'my_app.settings'
my_app-web-1 | Starting development server at http://0.0.0.0:8000/
my_app-web-1 | Quit the server with CONTROL-C.
my_app-chrome-1 | 14:01:33.430 INFO [LoggingOptions.configureLogEncoding] - Using the system default encoding
my_app-chrome-1 | 14:01:33.453 INFO [OpenTelemetryTracer.createTracer] - Using OpenTelemetry for tracing
my_app-chrome-1 | 14:01:35.487 INFO [NodeOptions.getSessionFactories] - Detected 2 available processors
my_app-chrome-1 | 14:01:35.608 INFO [NodeOptions.report] - Adding chrome for {"browserVersion": "108.0","se:noVncPort": 7900,"browserName": "chrome","platformName": "LINUX","se:vncEnabled": true} 1 times
my_app-chrome-1 | 14:01:35.649 INFO [Node.<init>] - Binding additional locator mechanisms: name, relative, id
my_app-chrome-1 | 14:01:35.709 INFO [GridModel.setAvailability] - Switching Node 9f76899f-6574-4e21-9413-d6141aa6c584 (uri: http://172.25.0.2:4444) from DOWN to UP
my_app-chrome-1 | 14:01:35.709 INFO [LocalDistributor.add] - Added node 9f76899f-6574-4e21-9413-d6141aa6c584 at http://172.25.0.2:4444. Health check every 120s
my_app-chrome-1 | 14:01:36.147 INFO [Standalone.execute] - Started Selenium Standalone 4.7.2 (revision 4d4020c3b7): http://172.25.0.2:4444
The logs indicate Selenium is currently available at http://172.25.0.2:4444, so I try:
>>> from selenium.webdriver import ChromeOptions, Remote
>>> options = ChromeOptions()
>>> options.add_argument('--headless')
>>> driver = Remote('http://172.25.0.2:4444')
It hangs forever; no output, log messages, or anything else happens until I press ^C. So how exactly is this supposed to be used? Also, for some reason, if the IP address is replaced with http://chrome:4444, the connection is refused.
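For reference, this is the variant I would have expected to work from the host. The ports mapping on the chrome service is an assumption on my part (it is not in my current file), and the client call follows the Selenium 4 Remote signature, with options passed explicitly:
chrome:
  image: selenium/standalone-chrome
  hostname: chrome
  privileged: true
  shm_size: 2g
  ports:
    - "4444:4444"   # assumed addition: publish the grid port to the host
>>> from selenium.webdriver import ChromeOptions, Remote
>>> options = ChromeOptions()
>>> options.add_argument('--headless')
>>> driver = Remote('http://localhost:4444', options=options)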
Related
I am running Node-RED in Docker Compose. From the .gitlab-ci.yml file I am calling the docker/compose image; my pipeline is working, and I can see this:
node-red | 11 Nov 11:28:51 - [info]
node-red |
node-red | Welcome to Node-RED
node-red | ===================
node-red |
node-red | 11 Nov 11:28:51 - [info] Node-RED version: v3.0.2
node-red | 11 Nov 11:28:51 - [info] Node.js version: v16.16.0
node-red | 11 Nov 11:28:51 - [info] Linux 5.15.49-linuxkit x64 LE
node-red | 11 Nov 11:28:52 - [info] Loading palette nodes
node-red | 11 Nov 11:28:53 - [info] Settings file : /data/settings.js
node-red | 11 Nov 11:28:53 - [info] Context store : 'default' [module=memory]
node-red | 11 Nov 11:28:53 - [info] User directory : /data
node-red | 11 Nov 11:28:53 - [warn] Projects disabled : editorTheme.projects.enabled=false
node-red | 11 Nov 11:28:53 - [info] Flows file : /data/flows.json
node-red | 11 Nov 11:28:53 - [warn]
node-red |
node-red | ---------------------------------------------------------------------
node-red | Your flow credentials file is encrypted using a system-generated key.
node-red |
node-red | If the system-generated key is lost for any reason, your credentials
node-red | file will not be recoverable, you will have to delete it and re-enter
node-red | your credentials.
node-red |
node-red | You should set your own key using the 'credentialSecret' option in
node-red | your settings file. Node-RED will then re-encrypt your credentials
node-red | file using your chosen key the next time you deploy a change.
node-red | ---------------------------------------------------------------------
node-red |
node-red | 11 Nov 11:28:53 - [info] Server now running at http://127.0.0.1:1880/
node-red | 11 Nov 11:28:53 - [warn] Encrypted credentials not found
node-red | 11 Nov 11:28:53 - [info] Starting flows
node-red | 11 Nov 11:28:53 - [info] Started flows
But when I try to open the localhost server to access Node-RED or the dashboard, I get the error "Failed to open page".
This is my docker-compose.yml
version: "3.7"
services:
node-red:
image: nodered/node-red:latest
user: '1000'
container_name: node-red
environment:
- TZ=Europe/Amsterdam
ports:
- "1880:1880"
This is my .gitlab-ci.yml
yateen-docker:
  stage: build
  image:
    name: docker/compose
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  before_script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
  script:
    - docker-compose up
  only:
    - main
Any help would be appreciated!
I tried to create the Node-RED container via docker-compose, not just by running a docker run command. My Node-RED image is running, but I can't access the server page.
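For reference, the variation I would try next, assuming the containers run on the docker:dind service, so published ports would be reachable at the docker hostname rather than localhost (the wget check is purely illustrative):
script:
  - docker-compose up -d
  # assumption: with docker:dind, published ports live on the dind host
  - wget -qO- http://docker:1880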
I have forwarded the application's MySQL port to 3307 because I need my host MySQL to keep running on 3306, but it gives the error below.
Also, I am able to get the welcome page after running sail up.
I am using the latest version of Laravel 9.
Error
Illuminate\Database\QueryException
PHP 8.1.9
9.26.1
SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo for mysql failed: Temporary failure in name resolution
SELECT count(*) AS aggregate FROM `users` WHERE `email` = test@test.com
.env
APP_URL=http://127.0.0.1
APP_PORT=81
DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
FORWARD_DB_PORT=3307
docker-compose.yml
# For more information: https://laravel.com/docs/sail
version: '3'
services:
  laravel.test:
    build:
      context: ./vendor/laravel/sail/runtimes/8.1
      dockerfile: Dockerfile
      args:
        WWWGROUP: '${WWWGROUP}'
    image: sail-8.1/app
    extra_hosts:
      - 'host.docker.internal:host-gateway'
    ports:
      - '${APP_PORT:-81}:80'
      - '${VITE_PORT:-5174}:${VITE_PORT:-5173}'
    environment:
      WWWUSER: '${WWWUSER}'
      LARAVEL_SAIL: 1
      XDEBUG_MODE: '${SAIL_XDEBUG_MODE:-off}'
      XDEBUG_CONFIG: '${SAIL_XDEBUG_CONFIG:-client_host=host.docker.internal}'
    volumes:
      - '.:/var/www/html'
    networks:
      - sail
    depends_on:
      - mysql
  mysql:
    image: 'mysql/mysql-server:8.0'
    ports:
      - '${FORWARD_DB_PORT:-3307}:3306'
    environment:
      MYSQL_ROOT_PASSWORD: '{DB_PASSWORD}'
      MYSQL_ROOT_HOST: '{DB_HOST}'
      MYSQL_DATABASE: '{DB_DATABASE}'
      MYSQL_USER: '{DB_USERNAME}'
      MYSQL_PASSWORD: '{DB_PASSWORD}'
      MYSQL_ALLOW_EMPTY_PASSWORD: 1
    volumes:
      - 'sail-mysql:/var/lib/mysql'
      - './vendor/laravel/sail/database/mysql/create-testing-database.sh:/docker-entrypoint-initdb.d/10-create-testing-database.sh'
    networks:
      - sail
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-p${DB_PASSWORD}"]
      retries: 3
      timeout: 5s
networks:
  sail:
    driver: bridge
volumes:
  sail-mysql:
    driver: local
Update 1
My terminal output is as follows:
sm_v2-laravel.test-1 "start-container" laravel.test exited (0)
Shutting down old Sail processes...
[+] Running 3/3
Found orphan containers ([sm_v2-service-1]) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
 ⠿ Network sm_v2_sail Created 0.2s
⠿ Container sm_v2-mysql-1 Created 1.5s
⠿ Container sm_v2-laravel.test-1 Created 0.5s
Attaching to sm_v2-laravel.test-1, sm_v2-mysql-1
sm_v2-mysql-1 | [Entrypoint] MySQL Docker Image 8.0.30-1.2.9-server
sm_v2-mysql-1 | [Entrypoint] Starting MySQL 8.0.30-1.2.9-server
sm_v2-mysql-1 | 2022-08-30T15:19:04.087084Z 0 [Warning] [MY-011068] [Server] The syntax '--skip-host-cache' is deprecated and will be removed in a future release. Please use SET GLOBAL host_cache_size=0 instead.
sm_v2-mysql-1 | 2022-08-30T15:19:04.092964Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.30) starting as process 1
sm_v2-mysql-1 | 2022-08-30T15:19:04.148193Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
sm_v2-mysql-1 | 2022-08-30T15:19:04.303213Z 1 [ERROR] [MY-012960] [InnoDB] Cannot create redo log files because data files are corrupt or the database was not shut down cleanly after creating the data files.
sm_v2-mysql-1 | 2022-08-30T15:19:04.755173Z 1 [ERROR] [MY-010334] [Server] Failed to initialize DD Storage Engine
sm_v2-mysql-1 | 2022-08-30T15:19:04.755609Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed.
sm_v2-mysql-1 | 2022-08-30T15:19:04.755681Z 0 [ERROR] [MY-010119] [Server] Aborting
sm_v2-mysql-1 | 2022-08-30T15:19:04.757223Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.30) MySQL Community Server - GPL.
sm_v2-mysql-1 exited with code 1
sm_v2-laravel.test-1 | 2022-08-30 15:19:07,746 INFO Set uid to user 0 succeeded
sm_v2-laravel.test-1 | 2022-08-30 15:19:07,751 INFO supervisord started with pid 1
sm_v2-laravel.test-1 | 2022-08-30 15:19:08,756 INFO spawned: 'php' with pid 16
sm_v2-laravel.test-1 | 2022-08-30 15:19:09,759 INFO success: php entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
sm_v2-laravel.test-1 |
sm_v2-laravel.test-1 | INFO Server running on [http://0.0.0.0:80].
sm_v2-laravel.test-1 |
sm_v2-laravel.test-1 | Press Ctrl+C to stop the server
sm_v2-laravel.test-1 |
sm_v2-laravel.test-1 | 2022-08-30 15:19:21 ................................................... ~ 1s
sm_v2-laravel.test-1 | 2022-08-30 15:19:23 /favicon.ico ...................................... ~ 0s
sm_v2-laravel.test-1 | 2022-08-30 15:19:23 ................................................... ~ 1s
sm_v2-laravel.test-1 | 2022-08-30 15:19:24 /build/assets/app.ac81e540.css .................... ~ 0s
sm_v2-laravel.test-1 | 2022-08-30 15:19:24 /build/assets/app.ab93cf8a.js ..................... ~ 0s
sm_v2-laravel.test-1 | 2022-08-30 15:19:24 /favicon.ico ...................................... ~ 0s
sm_v2-laravel.test-1 | 2022-08-30 15:19:27 ................................................... ~ 0s
sm_v2-laravel.test-1 | 2022-08-30 15:19:29 /favicon.ico ...................................... ~ 0s
sm_v2-laravel.test-1 | 2022-08-30 16:07:14 ................................................... ~ 0s
Update 2
I get a different error now:
SQLSTATE[HY000] [1045] Access denied for user 'root'@'192.168.128.3' (using password: YES)
I finally solved it after more than a week of mental frustration. It is very strange that no one was able to provide an answer in any of the forums; yes, I tried all the well-known forums.
I made sure that two users were added on my host (main computer) machine, not the Docker MySQL, and I granted them full privileges using the mysql CLI. There were two entries like these, along with other entries:
root | %
root | localhost
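For reference, a sketch of the statements that produce entries like those; the password is a placeholder and the exact grant is an assumption based on my setup:
-- run in the mysql CLI on the host; 'secret' is a placeholder password
CREATE USER IF NOT EXISTS 'root'@'%' IDENTIFIED BY 'secret';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;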
I ran the following commands one after another. I don't know which commands exactly solved the problem, as I am a beginner with Docker and Sail, but here are the steps I tried, after which it started working.
I was getting Docker is not running., so I tried the following to get Docker running:
sudo systemctl enable docker.service
sudo systemctl enable docker.socket
After that I tried sail up, but it did not work, so I ran the following:
sudo systemctl stop docker
sudo systemctl start docker
sudo systemctl disable docker.service
sudo systemctl enable docker.service
sail up
After that I rebooted my computer (I am on Ubuntu 22.04)
reboot
Removed some unnecessary files. I also got a failed error in the docker service, which I solved by running lines 2 and 3 of the code below:
sudo rm /etc/systemd/system/docker.service.d/override.conf
sudo systemctl reset-failed docker.service
sudo systemctl start docker.service
systemctl daemon-reload
sudo systemctl start docker.service
sail down
sail build --no-cache
sail up
php artisan config:clear
After that I migrated the database and it worked:
sail artisan migrate
After that
sudo systemctl enable docker
sail up
sail build
sail ps
sudo usermod -aG docker ${USER}
Removed daemon.json
sudo rm daemon.json
Removed old volumes
I think this was helpful
sail down --rmi all -v
sail up / (you can use sail up --no-cache)
Now MySQL works on the host computer's port 3306, as well as on the other ports used for Docker (3307, 3308), simultaneously.
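For reference, the kind of check that confirms each server responds; the host, user, and password here are assumptions for illustration:
# host MySQL on 3306
mysql -h 127.0.0.1 -P 3306 -u root -p -e 'SELECT 1;'
# Sail's forwarded MySQL on 3307
mysql -h 127.0.0.1 -P 3307 -u root -p -e 'SELECT 1;'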
I appreciate @Mihai's effort, because only @Mihai responded in the comments.
Update 2
I had to add platform: 'linux/x86_64' to the docker-compose.yml file:
mysql:
  image: 'mysql/mysql-server:8.0'
  platform: 'linux/x86_64'
  ports:
    - '${FORWARD_DB_PORT:-3307}:3306'
Who can help me deal with Docker static analysis with Clair?
I get an error when analyzing; help me figure it out, or tell me how to install the Docker Clair scanner correctly.
Getting Setup
git clone git@github.com:Charlie-belmer/Docker-security-example.git
docker-compose.yml
version: '2.1'
services:
  postgres:
    image: postgres:12.1
    restart: unless-stopped
    volumes:
      - ./docker-compose-data/postgres-data/:/var/lib/postgresql/data:rw
    environment:
      - POSTGRES_PASSWORD=ChangeMe
      - POSTGRES_USER=clair
      - POSTGRES_DB=clair
  clair:
    image: quay.io/coreos/clair:v4.3.4
    restart: unless-stopped
    volumes:
      - ./docker-compose-data/clair-config/:/config/:ro
      - ./docker-compose-data/clair-tmp/:/tmp/:rw
    depends_on:
      postgres:
        condition: service_started
    command: [--log-level=debug, --config, /config/config.yml]
    user: root
  clairctl:
    image: jgsqware/clairctl:latest
    restart: unless-stopped
    environment:
      - DOCKER_API_VERSION=1.41
    volumes:
      - ./docker-compose-data/clairctl-reports/:/reports/:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
    depends_on:
      clair:
        condition: service_started
    user: root
docker-compose up
The server starts without errors but gets stuck on the same message. I don't understand what it doesn't like:
test@parallels-virtual-platform:~/Docker-security-example/clair$ docker-compose up
clair_postgres_1 is up-to-date
Recreating clair_clair_1 ... done
Recreating clair_clairctl_1 ... done
Attaching to clair_postgres_1, clair_clair_1, clair_clairctl_1
clair_1 | flag provided but not defined: -log-level
clair_1 | Usage of /bin/clair:
clair_1 | -conf value
clair_1 | The file system path to Clair's config file.
clair_1 | -mode value
clair_1 | The operation mode for this server. (default combo)
postgres_1 |
postgres_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres_1 |
postgres_1 | 2021-11-16 22:55:36.851 UTC [1] LOG: starting PostgreSQL 12.1 (Debian 12.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
postgres_1 | 2021-11-16 22:55:36.851 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_1 | 2021-11-16 22:55:36.851 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_1 | 2021-11-16 22:55:36.853 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1 | 2021-11-16 22:55:36.877 UTC [24] LOG: database system was shut down at 2021-11-16 22:54:58 UTC
postgres_1 | 2021-11-16 22:55:36.888 UTC [1] LOG: database system is ready to accept connections
postgres_1 | 2021-11-16 23:01:15.219 UTC [1] LOG: received smart shutdown request
postgres_1 | 2021-11-16 23:01:15.225 UTC [1] LOG: background worker "logical replication launcher" (PID 30) exited with exit code 1
postgres_1 |
postgres_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres_1 |
postgres_1 | 2021-11-16 23:02:11.993 UTC [1] LOG: starting PostgreSQL 12.1 (Debian 12.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
postgres_1 | 2021-11-16 23:02:11.994 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_1 | 2021-11-16 23:02:11.994 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_1 | 2021-11-16 23:02:11.995 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1 | 2021-11-16 23:02:12.009 UTC [26] LOG: database system was interrupted; last known up at 2021-11-16 23:00:37 UTC
postgres_1 | 2021-11-16 23:02:12.164 UTC [26] LOG: database system was not properly shut down; automatic recovery in progress
postgres_1 | 2021-11-16 23:02:12.166 UTC [26] LOG: redo starts at 0/1745C50
postgres_1 | 2021-11-16 23:02:12.166 UTC [26] LOG: invalid record length at 0/1745D38: wanted 24, got 0
postgres_1 | 2021-11-16 23:02:12.166 UTC [26] LOG: redo done at 0/1745D00
postgres_1 | 2021-11-16 23:02:12.180 UTC [1] LOG: database system is ready to accept connections
postgres_1 | 2021-11-16 23:02:12.471 UTC [33] ERROR: duplicate key value violates unique constraint "lock_name_key"
postgres_1 | 2021-11-16 23:02:12.471 UTC [33] DETAIL: Key (name)=(updater) already exists.
postgres_1 | 2021-11-16 23:02:12.471 UTC [33] STATEMENT: INSERT INTO Lock(name, owner, until) VALUES($1, $2, $3)
clair_clair_1 exited with code 2
clair_1 | flag provided but not defined: -log-level
clair_1 | Usage of /bin/clair:
clair_1 | -conf value
clair_1 | The file system path to Clair's config file.
clair_1 | -mode value
clair_1 | The operation mode for this server. (default combo)
clair_1 | flag provided but not defined: -log-level
clair_1 | Usage of /bin/clair:
clair_1 | -conf value
clair_1 | The file system path to Clair's config file.
clair_1 | -mode value
clair_1 | The operation mode for this server. (default combo)
clair_clair_1 exited with code 2
clair_1 | flag provided but not defined: -log-level
clair_1 | Usage of /bin/clair:
clair_1 | -conf value
clair_1 | The file system path to Clair's config file.
clair_1 | -mode value
clair_1 | The operation mode for this server. (default combo)
clair_clair_1 exited with code 2
clair_1 | flag provided but not defined: -log-level
clair_1 | Usage of /bin/clair:
clair_1 | -conf value
clair_1 | The file system path to Clair's config file.
clair_1 | -mode value
clair_1 | The operation mode for this server. (default combo)
clair_clair_1 exited with code 2
clair_1 | flag provided but not defined: -log-level
clair_1 | Usage of /bin/clair:
clair_1 | -conf value
clair_1 | The file system path to Clair's config file.
clair_1 | -mode value
clair_1 | The operation mode for this server. (default combo)
clair_clair_1 exited with code 2
clair_1 | flag provided but not defined: -log-level
clair_1 | Usage of /bin/clair:
clair_1 | -conf value
clair_1 | The file system path to Clair's config file.
clair_1 | -mode value
clair_1 | The operation mode for this server. (default combo)
Installing a bad container:
docker pull imiell/bad-dockerfile
docker-compose exec clairctl clairctl analyze -l imiell/bad-dockerfile
client quit unexpectedly
2021-11-16 23:05:19.221606 C | cmd: pushing image "imiell/bad-dockerfile:latest": pushing layer to clair: Post http://clair:6060/v1/layers: dial tcp: lookup clair: Try again
I don't understand what it doesn't like for the analysis.
I just solved this yesterday. Version 4.3.4 of Clair only supports two command-line options, -mode and -conf. Your output bears this out:
clair_1 | flag provided but not defined: -log-level
clair_1 | Usage of /bin/clair:
clair_1 | -conf value
clair_1 | The file system path to Clair's config file.
clair_1 | -mode value
clair_1 | The operation mode for this server. (default combo)
Change the command line to only specify your configuration file (line 23 of your docker-compose.yml) and place your debug directive in the configuration file.
command: [--conf, /config/config.yml]
This should get Clair running.
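For example, a minimal sketch of the change, assuming Clair v4's top-level log_level key; merge it into your existing config.yml rather than replacing the file:
# /config/config.yml (excerpt)
log_level: debug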
I think you are using the old clairctl with the new Clair v4. You should be using clairctl from here: https://github.com/quay/clair/releases/tag/v4.3.5.
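For what it's worth, a sketch of invoking the v4 clairctl once downloaded; report is the v4 subcommand for scanning an image, but check the exact flags against your release:
# after downloading the clairctl binary from the v4.3.5 release page
clairctl report imiell/bad-dockerfile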
The console logs /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/tables during startup of the Docker container (see full log below). What could be the cause of this, considering I have the following code?
File overview
users.sql
BEGIN TRANSACTION;

CREATE TABLE users (
  id serial PRIMARY KEY,
  name VARCHAR(100),
  email text UNIQUE NOT NULL,
  entries BIGINT DEFAULT 0,
  joined TIMESTAMP NOT NULL
);

COMMIT;
deploy_schemas.sql
-- Deploy fresh database tables
\i '/docker-entrypoint-initdb.d/tables/users.sql'
\i '/docker-entrypoint-initdb.d/tables/login.sql'
Dockerfile (in postgres folder)
FROM postgres:12.2
ADD /tables/ /docker-entrypoint-initdb.d/tables/
ADD deploy_schemas.sql /docker-entrypoint-initdb.d/tables/
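For comparison, a variant I'd expect the entrypoint to pick up, assuming the goal is to have deploy_schemas.sql run at init: the stock postgres entrypoint only executes top-level *.sql, *.sql.gz, and *.sh files in /docker-entrypoint-initdb.d and skips everything else (including the tables directory) with an "ignoring" message, although \i includes from an executed script can still read files inside it.
FROM postgres:12.2
# keep per-table scripts in the subdirectory (read only via \i) ...
ADD /tables/ /docker-entrypoint-initdb.d/tables/
# ... but place the entry script at the top level so the entrypoint executes it
ADD deploy_schemas.sql /docker-entrypoint-initdb.d/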
docker-compose.yml
version: "3.3"
services:
# Backend API
smart-brain-app:
container_name: backend
# image: mode:14.2.0
build: ./
command: npm start
working_dir: /usr/src/smart-brain-api
environment:
POSTGRES_URI: postgres://postgres:1212#postgres:5431/smart-brain-api-db
links:
- postgres
ports:
- "3000:3000"
volumes:
- ./:/usr/src/smart-brain-api
# Postgres
postgres:
build: ./postgres
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: 1212
POSTGRES_DB: smart-brain-api-db
POSTGRES_HOST: postgres
ports:
- "5431:5432"
Dockerfile
FROM node:14.2.0
WORKDIR /usr/src/smart-brain-api
COPY ./ ./
RUN npm install | npm audit fix
CMD ["/bin/bash"]
Complete Log
Creating smart-brain-api_postgres_1 ... done
Creating backend ... done
Attaching to smart-brain-api_postgres_1, backend
postgres_1 | The files belonging to this database system will be owned by user "postgres".
postgres_1 | This user must also own the server process.
postgres_1 |
postgres_1 | The database cluster will be initialized with locale "en_US.utf8".
postgres_1 | The default database encoding has accordingly been set to "UTF8".
postgres_1 | The default text search configuration will be set
to "english".
postgres_1 |
postgres_1 | Data page checksums are disabled.
postgres_1 |
postgres_1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok
postgres_1 | creating subdirectories ... ok
postgres_1 | selecting dynamic shared memory implementation ... posix
postgres_1 | selecting default max_connections ... 100
postgres_1 | selecting default shared_buffers ... 128MB
postgres_1 | selecting default time zone ... Etc/UTC
postgres_1 | creating configuration files ... ok
postgres_1 | running bootstrap script ... ok
backend |
backend | > node#1.0.0 start /usr/src/smart-brain-api
backend | > npx nodemon server.js
backend |
postgres_1 | performing post-bootstrap initialization ... ok
postgres_1 | syncing data to disk ... ok
postgres_1 |
postgres_1 | initdb: warning: enabling "trust" authentication for local connections
postgres_1 | You can change this by editing pg_hba.conf or using the option -A, or
postgres_1 | --auth-local and --auth-host, the next time you run initdb.
postgres_1 |
postgres_1 | Success. You can now start the database server using:
postgres_1 |
postgres_1 | pg_ctl -D /var/lib/postgresql/data -l logfile start
postgres_1 |
postgres_1 | waiting for server to start....2020-05-10 01:31:31.548 UTC [46] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
postgres_1 | 2020-05-10 01:31:31.549 UTC [46] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1 | 2020-05-10 01:31:31.565 UTC [47] LOG: database system was shut down at 2020-05-10 01:31:31 UTC
postgres_1 | 2020-05-10 01:31:31.569 UTC [46] LOG: database system is ready to accept connections
postgres_1 | done
postgres_1 | server started
postgres_1 | CREATE DATABASE
postgres_1 |
postgres_1 |
postgres_1 | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/tables
postgres_1 |
postgres_1 | 2020-05-10 01:31:31.772 UTC [46] LOG: received fast shutdown request
postgres_1 | waiting for server to shut down....2020-05-10 01:31:31.774 UTC [46] LOG: aborting any active transactions
postgres_1 | 2020-05-10 01:31:31.775 UTC [46] LOG: background worker "logical replication launcher" (PID 53) exited with exit code 1
postgres_1 | 2020-05-10 01:31:31.778 UTC [48] LOG: shutting down
postgres_1 | 2020-05-10 01:31:31.791 UTC [46] LOG: database system is shut down
postgres_1 | done
postgres_1 | server stopped
postgres_1 |
postgres_1 | PostgreSQL init process complete; ready for start up.
postgres_1 |
postgres_1 | 2020-05-10 01:31:31.884 UTC [1] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
postgres_1 | 2020-05-10 01:31:31.884 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_1 | 2020-05-10 01:31:31.884 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_1 | 2020-05-10 01:31:31.894 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1 | 2020-05-10 01:31:31.910 UTC [64] LOG: database system was shut down at 2020-05-10 01:31:31 UTC
postgres_1 | 2020-05-10 01:31:31.914 UTC [1] LOG: database system is ready to accept connections
I am using docker-compose and nodemon for my dev.
my fs looks like this:
├── code
│   ├── images
│   │   ├── api
│   │   ├── db
│   └── topology
│       └── docker-compose.yml
Normally when I run docker-compose up --build, files are copied from my local computer to the containers. Since I am in dev mode, I don't want to run docker-compose up --build every time; that is why I am using a volume to share a directory between my local computer and the container.
I did some research, and this is what I came up with:
API, Dockerfile:
FROM node:8
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN npm install nodemon -g --save
RUN npm install
CMD [ "nodemon", "app.js" ]
DB, Dockerfile:
FROM mongo:3.2-jessie
docker-compose.yml
version: '2'
services:
  api:
    build: ../images/api
    volumes:
      - .:/usr/src/app
    ports:
      - "7000:7000"
    links: ["db"]
  db:
    build: ../images/db
    ports:
      - "27017:27017"
The problem is that when I run docker-compose up --build, I get this error:
---> 327987c38250
Removing intermediate container f7b46029825f
Step 7/7 : CMD nodemon app.js
---> Running in d8430d03bcd2
---> ee5de77d78eb
Removing intermediate container d8430d03bcd2
Successfully built ee5de77d78eb
Recreating topology_db_1
Recreating topology_api_1
Attaching to topology_db_1, topology_api_1
db_1 | 2018-09-22T10:08:41.679+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=5b93871d0f4f
db_1 | 2018-09-22T10:08:41.679+0000 I CONTROL [initandlisten] db version v3.2.21
db_1 | 2018-09-22T10:08:41.679+0000 I CONTROL [initandlisten] git version: 1ab1010737145ba3761318508ff65ba74dfe8155
db_1 | 2018-09-22T10:08:41.679+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1t 3 May 2016
db_1 | 2018-09-22T10:08:41.679+0000 I CONTROL [initandlisten] allocator: tcmalloc
db_1 | 2018-09-22T10:08:41.679+0000 I CONTROL [initandlisten] modules: none
db_1 | 2018-09-22T10:08:41.679+0000 I CONTROL [initandlisten] build environment:
db_1 | 2018-09-22T10:08:41.679+0000 I CONTROL [initandlisten] distmod: debian81
db_1 | 2018-09-22T10:08:41.680+0000 I CONTROL [initandlisten] distarch: x86_64
db_1 | 2018-09-22T10:08:41.680+0000 I CONTROL [initandlisten] target_arch: x86_64
db_1 | 2018-09-22T10:08:41.680+0000 I CONTROL [initandlisten] options: {}
db_1 | 2018-09-22T10:08:41.686+0000 I - [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
db_1 | 2018-09-22T10:08:41.687+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=8G,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),verbose=(recovery_progress),
db_1 | 2018-09-22T10:08:41.905+0000 I STORAGE [initandlisten] WiredTiger [1537610921:904991][1:0x7fdf57debcc0], txn-recover: Main recovery loop: starting at 89/4096
db_1 | 2018-09-22T10:08:41.952+0000 I STORAGE [initandlisten] WiredTiger [1537610921:952261][1:0x7fdf57debcc0], txn-recover: Recovering log 89 through 90
db_1 | 2018-09-22T10:08:41.957+0000 I STORAGE [initandlisten] WiredTiger [1537610921:957000][1:0x7fdf57debcc0], txn-recover: Recovering log 90 through 90
db_1 | 2018-09-22T10:08:42.148+0000 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
db_1 | 2018-09-22T10:08:42.148+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
db_1 | 2018-09-22T10:08:42.148+0000 I NETWORK [initandlisten] waiting for connections on port 27017
api_1 | Usage: nodemon [nodemon options] [script.js] [args]
api_1 |
api_1 | See "nodemon --help" for more.
api_1 |
topology_api_1 exited with code 0
If I comment out:
volumes:
  - .:/usr/src/app
it builds and runs correctly.
Can someone help me find what is wrong with my approach?
Thanks
Are docker-compose.yml and the api Dockerfile in different directories? The "volumes" instruction in the compose file overwrites the "COPY" instruction from the Dockerfile, and they have different contexts: the volume path is resolved relative to the compose file, while COPY is resolved relative to the build context.
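A sketch of the likely fix, assuming the intent is to mount the API source itself (the bind-mount path below is relative to docker-compose.yml, i.e. the topology directory):
api:
  build: ../images/api
  volumes:
    - ../images/api:/usr/src/app
  ports:
    - "7000:7000"
  links: ["db"]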