I am deploying a Docker container with ECS. I can run the container fine locally, but when I use ECS it keeps stopping the container and then starting a new one. I checked the Docker logs and there is no error there either. I also checked docker stats and things look OK. Below is what the ECS agent log shows:
2018-02-16T07:24:01Z [INFO] Managed task [arn:aws:ecs:us-west-2:035804961478:task/c13ba3f3-6ac8-49c5-a649-3d90e363ce4d]: Cgroup resource set up for task complete
2018-02-16T07:24:01Z [INFO] Task engine [arn:aws:ecs:us-west-2:035804961478:task/c13ba3f3-6ac8-49c5-a649-3d90e363ce4d]: pulling container openimage-http concurrently
2018-02-16T07:24:01Z [INFO] Task engine [arn:aws:ecs:us-west-2:035804961478:task/c13ba3f3-6ac8-49c5-a649-3d90e363ce4d]: Recording timestamp for starting image pulltime: 2018-02-16 07:24:01.209507829 +0000 UTC m=+30.870498308
2018-02-16T07:27:29Z [INFO] Adding image name- 035804961478.dkr.ecr.us-west-2.amazonaws.com/adobesearch/openimage:classifierscpu to Image state- sha256:b0b930a705e7df39404de30c58da2e573a2deb519b212c0d0a7ab74cf9d85192
2018-02-16T07:27:29Z [INFO] Updating container reference openimage-http in Image State - sha256:b0b930a705e7df39404de30c58da2e573a2deb519b212c0d0a7ab74cf9d85192
2018-02-16T07:27:29Z [INFO] Saving state! module="statemanager"
2018-02-16T07:27:29Z [INFO] Saving state! module="statemanager"
2018-02-16T07:27:29Z [INFO] Task engine [arn:aws:ecs:us-west-2:035804961478:task/c13ba3f3-6ac8-49c5-a649-3d90e363ce4d]: Finished pulling container 035804961478.dkr.ecr.us-west-2.amazonaws.com/adobesearch/openimage:classifierscpu in 3m27.982214992s
2018-02-16T07:27:29Z [INFO] Managed task [arn:aws:ecs:us-west-2:035804961478:task/c13ba3f3-6ac8-49c5-a649-3d90e363ce4d]: Cgroup resource set up for task complete
2018-02-16T07:27:29Z [INFO] Task engine [arn:aws:ecs:us-west-2:035804961478:task/c13ba3f3-6ac8-49c5-a649-3d90e363ce4d]: creating container: openimage-http
2018-02-16T07:27:29Z [INFO] Task engine [arn:aws:ecs:us-west-2:035804961478:task/c13ba3f3-6ac8-49c5-a649-3d90e363ce4d]: created container name mapping for task: openimage-http -> ecs-classifierscpu-http-TaskDefinition-1AYVW69U0MPZX-1-openimage-http-b2cbd59b9dd7f4f7c501
2018-02-16T07:27:29Z [INFO] Saving state! module="statemanager"
2018-02-16T07:27:29Z [INFO] Task engine [arn:aws:ecs:us-west-2:035804961478:task/c13ba3f3-6ac8-49c5-a649-3d90e363ce4d]: created docker container for task: openimage-http -> e9710d91b36113d319dc04a274fe0fdf2908e102c25767ab2fab14e822277049
2018-02-16T07:27:29Z [INFO] Managed task [arn:aws:ecs:us-west-2:035804961478:task/c13ba3f3-6ac8-49c5-a649-3d90e363ce4d]: unable to create task state change event []: create task state change event api: status not recognized by ECS: CREATED
2018-02-16T07:27:29Z [INFO] Managed task [arn:aws:ecs:us-west-2:035804961478:task/c13ba3f3-6ac8-49c5-a649-3d90e363ce4d]: Cgroup resource set up for task complete
2018-02-16T07:27:29Z [INFO] Managed task [arn:aws:ecs:us-west-2:035804961478:task/c13ba3f3-6ac8-49c5-a649-3d90e363ce4d]: redundant container state change. openimage-http to CREATED, but already CREATED
2018-02-16T07:27:29Z [INFO] Task engine [arn:aws:ecs:us-west-2:035804961478:task/c13ba3f3-6ac8-49c5-a649-3d90e363ce4d]: starting container: openimage-http
2018-02-16T07:27:30Z [INFO] Managed task [arn:aws:ecs:us-west-2:035804961478:task/c13ba3f3-6ac8-49c5-a649-3d90e363ce4d]: sending container change event [openimage-http]: arn:aws:ecs:us-west-2:035804961478:task/c13ba3f3-6ac8-49c5-a649-3d90e363ce4d openimage-http -> RUNNING, Ports [{5000 8080 0.0.0.0 0}], Known Sent: NONE
2018-02-16T07:27:30Z [INFO] Managed task [arn:aws:ecs:us-west-2:035804961478:task/c13ba3f3-6ac8-49c5-a649-3d90e363ce4d]: sent container change event [openimage-http]: arn:aws:ecs:us-west-2:035804961478:task/c13ba3f3-6ac8-49c5-a649-3d90e363ce4d openimage-http -> RUNNING, Ports [{5000 8080 0.0.0.0 0}], Known Sent: NONE
2018-02-16T07:27:30Z [INFO] Managed task [arn:aws:ecs:us-west-2:035804961478:task/c13ba3f3-6ac8-49c5-a649-3d90e363ce4d]: sending task change event [arn:aws:ecs:us-west-2:035804961478:task/c13ba3f3-6ac8-49c5-a649-3d90e363ce4d -> RUNNING, Known Sent: NONE, PullStartedAt: 2018-02-16 07:24:01.209507829 +0000 UTC m=+30.870498308, PullStoppedAt: 2018-02-16 07:27:29.191733309 +0000 UTC m=+238.852723707, ExecutionStoppedAt: 0001-01-01 00:00:00 +0000 UTC]
2018-02-16T07:27:30Z [INFO] TaskHandler: batching container event: arn:aws:ecs:us-west-2:035804961478:task/c13ba3f3-6ac8-49c5-a649-3d90e363ce4d openimage-http -> RUNNING, Ports [{5000 8080 0.0.0.0 0}], Known Sent: NONE
2018-02-16T07:27:30Z [INFO] TaskHandler: Adding event: TaskChange: [arn:aws:ecs:us-west-2:035804961478:task/c13ba3f3-6ac8-49c5-a649-3d90e363ce4d -> RUNNING, Known Sent: NONE, PullStartedAt: 2018-02-16 07:24:01.209507829 +0000 UTC m=+30.870498308, PullStoppedAt: 2018-02-16 07:27:29.191733309 +0000 UTC m=+238.852723707, ExecutionStoppedAt: 0001-01-01 00:00:00 +0000 UTC, arn:aws:ecs:us-west-2:035804961478:task/c13ba3f3-6ac8-49c5-a649-3d90e363ce4d openimage-http -> RUNNING, Ports [{5000 8080 0.0.0.0 0}], Known Sent: NONE] sent: false
2018-02-16T07:27:30Z [INFO] TaskHandler: Sending task change: TaskChange: [arn:aws:ecs:us-west-2:035804961478:task/c13ba3f3-6ac8-49c5-a649-3d90e363ce4d -> RUNNING, Known Sent: NONE, PullStartedAt: 2018-02-16 07:24:01.209507829 +0000 UTC m=+30.870498308, PullStoppedAt: 2018-02-16 07:27:29.191733309 +0000 UTC m=+238.852723707, ExecutionStoppedAt: 0001-01-01 00:00:00 +0000 UTC, arn:aws:ecs:us-west-2:035804961478:task/c13ba3f3-6ac8-49c5-a649-3d90e363ce4d openimage-http -> RUNNING, Ports [{5000 8080 0.0.0.0 0}], Known Sent: NONE] sent: false
2018-02-16T07:27:30Z [INFO] Managed task [arn:aws:ecs:us-west-2:035804961478:task/c13ba3f3-6ac8-49c5-a649-3d90e363ce4d]: sent task change event [arn:aws:ecs:us-west-2:035804961478:task/c13ba3f3-6ac8-49c5-a649-3d90e363ce4d -> RUNNING, Known Sent: NONE, PullStartedAt: 2018-02-16 07:24:01.209507829 +0000 UTC m=+30.870498308, PullStoppedAt: 2018-02-16 07:27:29.191733309 +0000 UTC m=+238.852723707, ExecutionStoppedAt: 0001-01-01 00:00:00 +0000 UTC]
2018-02-16T07:27:30Z [INFO] Managed task [arn:aws:ecs:us-west-2:035804961478:task/c13ba3f3-6ac8-49c5-a649-3d90e363ce4d]: redundant container state change. openimage-http to RUNNING, but already RUNNING
2018-02-16T07:27:30Z [WARN] Error publishing metrics: stats engine: no task metrics to report
I also deployed another simple Docker test image on ECS, changing ONLY the tag name (the tag of the other Docker image in ECS) in the template, and it works perfectly. In the failing case, though, the container works perfectly well locally and also on the instance if I run docker run manually. I really don't know where else to look for the error.
Related
I am trying to run the Consul Docker container in host network mode, as suggested on Docker Hub, but I am unable to access the UI at port 8500.
My Docker host IP address: 192.168.30.12
Network interface used by the host: ens192
Here is my docker run command:
docker run -d --net=host -v /home/docker/conf.json:/consul/config/config.json -v /home/docker/data/:/consul/data/ -e CONSUL_BIND_INTERFACE=ens192 -e CONSUL_CLIENT_INTERFACE=ens192 --name=consulserver1 -d consul agent -server -bootstrap-expect=1 -client 0.0.0.0 -bind=192.168.30.12
I also see the following errors in the docker logs:
==> Found address '192.168.30.12' for interface 'ens192', setting bind option...
==> Found address '192.168.30.12' for interface 'ens192', setting client option...
==> Starting Consul agent...
Version: '1.14.4'
Build Date: '2023-01-26 15:47:10 +0000 UTC'
Node ID: 'd8e91718-dcf3-70be-dd29-c558158959f0'
Node name: 'docker-try1'
Datacenter: 'dc1' (Segment: '<all>')
Server: true (Bootstrap: true)
Client Addr: [0.0.0.0] (HTTP: 8500, HTTPS: -1, gRPC: -1, gRPC-TLS: 8503, DNS: 8600)
Cluster Addr: 192.168.30.12 (LAN: 8301, WAN: 8302)
Gossip Encryption: false
Auto-Encrypt-TLS: false
HTTPS TLS: Verify Incoming: false, Verify Outgoing: false, Min Version: TLSv1_2
gRPC TLS: Verify Incoming: false, Min Version: TLSv1_2
Internal RPC TLS: Verify Incoming: false, Verify Outgoing: false (Verify Hostname: false), Min Version: TLSv1_2
==> Log data will now stream in as it occurs:
2023-02-17T15:18:30.052Z [WARN] agent: BootstrapExpect is set to 1; this is the same as Bootstrap mode.
2023-02-17T15:18:30.052Z [WARN] agent: Node name "docker-try1" will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.
2023-02-17T15:18:30.052Z [WARN] agent: bootstrap = true: do not enable unless necessary
2023-02-17T15:18:30.057Z [WARN] agent.auto_config: BootstrapExpect is set to 1; this is the same as Bootstrap mode.
2023-02-17T15:18:30.057Z [WARN] agent.auto_config: Node name "docker-try1" will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.
2023-02-17T15:18:30.057Z [WARN] agent.auto_config: bootstrap = true: do not enable unless necessary
2023-02-17T15:18:30.061Z [INFO] agent.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:d8e91718-dcf3-70be-dd29-c558158959f0 Address:192.168.30.12:8300}]"
2023-02-17T15:18:30.061Z [INFO] agent.server.raft: entering follower state: follower="Node at 192.168.30.12:8300 [Follower]" leader-address= leader-id=
2023-02-17T15:18:30.062Z [INFO] agent.server.serf.wan: serf: EventMemberJoin: docker-try1.dc1 192.168.30.12
2023-02-17T15:18:30.062Z [WARN] agent.server.serf.wan: serf: Failed to re-join any previously known node
2023-02-17T15:18:30.062Z [INFO] agent.server.serf.lan: serf: EventMemberJoin: docker-try1 192.168.30.12
2023-02-17T15:18:30.063Z [INFO] agent.router: Initializing LAN area manager
2023-02-17T15:18:30.063Z [WARN] agent.server.serf.lan: serf: Failed to re-join any previously known node
2023-02-17T15:18:30.063Z [INFO] agent.server: Adding LAN server: server="docker-try1 (Addr: tcp/192.168.30.12:8300) (DC: dc1)"
2023-02-17T15:18:30.063Z [INFO] agent.server.autopilot: reconciliation now disabled
2023-02-17T15:18:30.064Z [INFO] agent.server: Handled event for server in area: event=member-join server=docker-try1.dc1 area=wan
2023-02-17T15:18:30.064Z [INFO] agent.server.cert-manager: initialized server certificate management
2023-02-17T15:18:30.064Z [INFO] agent: Started DNS server: address=0.0.0.0:8600 network=udp
2023-02-17T15:18:30.065Z [INFO] agent: Started DNS server: address=0.0.0.0:8600 network=tcp
2023-02-17T15:18:30.065Z [INFO] agent: Starting server: address=[::]:8500 network=tcp protocol=http
2023-02-17T15:18:30.065Z [INFO] agent: Started gRPC listeners: port_name=grpc_tls address=[::]:8503 network=tcp
2023-02-17T15:18:30.065Z [INFO] agent: started state syncer
2023-02-17T15:18:30.065Z [INFO] agent: Consul agent running!
2023-02-17T15:18:37.152Z [WARN] agent.cache: handling error in Cache.Notify: cache-type=connect-ca-leaf error="No cluster leader" index=0
2023-02-17T15:18:37.152Z [ERROR] agent.server.cert-manager: failed to handle cache update event: error="leaf cert watch returned an error: No cluster leader"
2023-02-17T15:18:37.248Z [ERROR] agent.anti_entropy: failed to sync remote state: error="No cluster leader"
2023-02-17T15:18:39.483Z [WARN] agent.server.raft: heartbeat timeout reached, starting election: last-leader-addr= last-leader-id=
2023-02-17T15:18:39.483Z [INFO] agent.server.raft: entering candidate state: node="Node at 192.168.30.12:8300 [Candidate]" term=7
2023-02-17T15:18:39.486Z [INFO] agent.server.raft: election won: term=7 tally=1
2023-02-17T15:18:39.486Z [INFO] agent.server.raft: entering leader state: leader="Node at 192.168.30.12:8300 [Leader]"
2023-02-17T15:18:39.486Z [INFO] agent.server: cluster leadership acquired
2023-02-17T15:18:39.487Z [INFO] agent.server: New leader elected: payload=docker-try1
2023-02-17T15:18:39.493Z [INFO] agent.server.autopilot: reconciliation now enabled
2023-02-17T15:18:39.493Z [INFO] agent.leader: started routine: routine="federation state anti-entropy"
2023-02-17T15:18:39.493Z [INFO] agent.leader: started routine: routine="federation state pruning"
2023-02-17T15:18:39.493Z [INFO] agent.leader: started routine: routine="streaming peering resources"
2023-02-17T15:18:39.493Z [INFO] agent.leader: started routine: routine="metrics for streaming peering resources"
2023-02-17T15:18:39.493Z [INFO] agent.leader: started routine: routine="peering deferred deletion"
2023-02-17T15:18:39.493Z [INFO] connect.ca: initialized primary datacenter CA from existing CARoot with provider: provider=consul
2023-02-17T15:18:39.493Z [INFO] agent.leader: started routine: routine="intermediate cert renew watch"
2023-02-17T15:18:39.493Z [INFO] agent.leader: started routine: routine="CA root pruning"
2023-02-17T15:18:39.493Z [INFO] agent.leader: started routine: routine="CA root expiration metric"
2023-02-17T15:18:39.493Z [INFO] agent.leader: started routine: routine="CA signing expiration metric"
2023-02-17T15:18:39.493Z [INFO] agent.leader: started routine: routine="virtual IP version check"
2023-02-17T15:18:39.493Z [INFO] agent.leader: stopping routine: routine="virtual IP version check"
2023-02-17T15:18:39.493Z [INFO] agent.leader: stopped routine: routine="virtual IP version check"
2023-02-17T15:18:40.065Z [ERROR] agent.server.autopilot: Failed to reconcile current state with the desired state
2023-02-17T15:18:41.061Z [INFO] agent: Synced node info
I think I figured it out.
There was a firewall blocking TCP ports. As soon as I opened all the ports recommended in the Consul documentation (Consul Ports), it started working.
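For reference, opening the default ports shown in the startup banner above would look roughly like this with firewalld (a sketch only; adjust to ufw/iptables or to your own port choices as needed):
sudo firewall-cmd --permanent --add-port=8300/tcp                      # server RPC
sudo firewall-cmd --permanent --add-port=8301/tcp --add-port=8301/udp  # Serf LAN
sudo firewall-cmd --permanent --add-port=8302/tcp --add-port=8302/udp  # Serf WAN
sudo firewall-cmd --permanent --add-port=8500/tcp                      # HTTP API/UI
sudo firewall-cmd --permanent --add-port=8503/tcp                      # gRPC-TLS
sudo firewall-cmd --permanent --add-port=8600/tcp --add-port=8600/udp  # DNS
sudo firewall-cmd --reload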
I seem to be having trouble starting Concourse on Portainer using the Stacks section. I have attached all of the relevant files below, but I feel like I am missing something. I know there might be a way to start this using the command line, but I am looking for a simple solution that is just a compose file if possible. That way, when I teach this to others later, the setup process is easier.
I have the following compose file that Concourse provided:
version: '3'

services:
  concourse-db:
    image: postgres
    restart: unless-stopped
    environment:
      POSTGRES_DB: concourse
      POSTGRES_PASSWORD: concourse_pass
      POSTGRES_USER: concourse_user
      PGDATA: /database

  concourse:
    image: concourse/concourse
    restart: unless-stopped
    command: quickstart
    privileged: true
    depends_on: [concourse-db]
    ports: ["8080:8080"]
    expose:
      - "8080"
    environment:
      CONCOURSE_POSTGRES_HOST: concourse-db
      CONCOURSE_POSTGRES_USER: concourse_user
      CONCOURSE_POSTGRES_PASSWORD: concourse_pass
      CONCOURSE_POSTGRES_DATABASE: concourse
      CONCOURSE_EXTERNAL_URL: http://localhost:8080
      CONCOURSE_ADD_LOCAL_USER: test:test
      CONCOURSE_MAIN_TEAM_LOCAL_USER: test
      # instead of relying on the default "detect"
      CONCOURSE_WORKER_BAGGAGECLAIM_DRIVER: overlay
      CONCOURSE_CLIENT_SECRET: Y29uY291cnNlLXdlYgo=
      CONCOURSE_TSA_CLIENT_SECRET: Y29uY291cnNlLXdvcmtlcgo=
      CONCOURSE_X_FRAME_OPTIONS: allow
      CONCOURSE_CONTENT_SECURITY_POLICY: "*"
      CONCOURSE_CLUSTER_NAME: tutorial
      CONCOURSE_WORKER_CONTAINERD_DNS_SERVER: "8.8.8.8"
      CONCOURSE_WORKER_RUNTIME: "containerd"
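For reference, outside of Portainer the same stack can be brought up from a shell with the Compose CLI (assuming the file is saved as docker-compose.yml and Docker Compose v2 is installed):
docker compose -f docker-compose.yml up -d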
I am getting errors on both the web/worker container and the database. Here are the outputs:
{"timestamp":"2022-03-23T14:57:06.947153851Z","level":"info","source":"worker","message":"worker.beacon-runner.beacon.signal.signalled","data":{"session":"4.1.6"}}
{"timestamp":"2022-03-23T14:57:06.947202860Z","level":"info","source":"worker","message":"worker.beacon-runner.logging-runner-exited","data":{"session":"12"}}
{"timestamp":"2022-03-23T14:57:06.947240334Z","level":"error","source":"quickstart","message":"quickstart.worker-runner.logging-runner-exited","data":{"error":"Exit trace for group:\ngarden exited with error: Exit trace for group:\ncontainerd-garden-backend exited with error: setup host network failed: error appending iptables rule: running [/sbin/iptables -t filter -A INPUT -i concourse0 -j REJECT --reject-with icmp-host-prohibited --wait]: exit status 1: iptables: No chain/target/match by that name.\n\ncontainerd exited with nil\n\ncontainer-sweeper exited with nil\nvolume-sweeper exited with nil\ndebug exited with nil\nbaggageclaim exited with nil\nhealthcheck exited with nil\nbeacon exited with nil\n","session":"2"}}
{"timestamp":"2022-03-23T14:57:06.947348599Z","level":"info","source":"web","message":"web.tsa-runner.logging-runner-exited","data":{"session":"2"}}
{"timestamp":"2022-03-23T14:57:06.947457476Z","level":"info","source":"atc","message":"atc.tracker.drain.start","data":{"session":"26.1"}}
{"timestamp":"2022-03-23T14:57:06.947657430Z","level":"info","source":"atc","message":"atc.tracker.drain.waiting","data":{"session":"26.1"}}
{"timestamp":"2022-03-23T14:57:06.947670921Z","level":"info","source":"atc","message":"atc.tracker.drain.done","data":{"session":"26.1"}}
{"timestamp":"2022-03-23T14:57:06.949573381Z","level":"info","source":"web","message":"web.atc-runner.logging-runner-exited","data":{"session":"1"}}
{"timestamp":"2022-03-23T14:57:06.950178927Z","level":"info","source":"quickstart","message":"quickstart.web-runner.logging-runner-exited","data":{"session":"1"}}
error: Exit trace for group:
worker exited with error: Exit trace for group:
garden exited with error: Exit trace for group:
containerd-garden-backend exited with error: setup host network failed: error appending iptables rule: running [/sbin/iptables -t filter -A INPUT -i concourse0 -j REJECT --reject-with icmp-host-prohibited --wait]: exit status 1: iptables: No chain/target/match by that name.
containerd exited with nil
container-sweeper exited with nil
volume-sweeper exited with nil
debug exited with nil
baggageclaim exited with nil
healthcheck exited with nil
beacon exited with nil
And here is the output for the database:
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /database ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... Etc/UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok
Success. You can now start the database server using:
pg_ctl -D /database -l logfile start
initdb: warning: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
waiting for server to start....2022-03-23 14:35:37.062 UTC [50] LOG: starting PostgreSQL 14.2 (Debian 14.2-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2022-03-23 14:35:37.144 UTC [50] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2022-03-23 14:35:37.539 UTC [51] LOG: database system was shut down at 2022-03-23 14:35:35 UTC
2022-03-23 14:35:37.617 UTC [50] LOG: database system is ready to accept connections
done
server started
CREATE DATABASE
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
waiting for server to shut down...2022-03-23 14:35:40.541 UTC [50] LOG: received fast shutdown request
.2022-03-23 14:35:40.562 UTC [50] LOG: aborting any active transactions
2022-03-23 14:35:40.565 UTC [50] LOG: background worker "logical replication launcher" (PID 57) exited with exit code 1
2022-03-23 14:35:40.566 UTC [52] LOG: shutting down
2022-03-23 14:35:40.829 UTC [50] LOG: database system is shut down
done
server stopped
PostgreSQL init process complete; ready for start up.
2022-03-23 14:35:40.961 UTC [1] LOG: starting PostgreSQL 14.2 (Debian 14.2-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2022-03-23 14:35:40.961 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2022-03-23 14:35:40.961 UTC [1] LOG: listening on IPv6 address "::", port 5432
2022-03-23 14:35:41.012 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2022-03-23 14:35:41.065 UTC [64] LOG: database system was shut down at 2022-03-23 14:35:40 UTC
2022-03-23 14:35:41.108 UTC [1] LOG: database system is ready to accept connections
2022-03-23 14:57:46.294 UTC [1] LOG: received fast shutdown request
2022-03-23 14:57:46.317 UTC [1] LOG: aborting any active transactions
2022-03-23 14:57:46.319 UTC [1] LOG: background worker "logical replication launcher" (PID 70) exited with exit code 1
2022-03-23 14:57:46.319 UTC [65] LOG: shutting down
I have created a Dockerfile with Supervisor.
I have added 2 processes to the supervisord configuration file.
The 1st process executes httpd or Tomcat.
The 2nd process calls an sh file. The sh file contains echo and read commands to accept user input and insert it into a property file.
The intention is to run the 1st process in the background and have the 2nd process wait for the user input.
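The supervisord configuration is shaped roughly like the following (the command paths and section contents here are illustrative placeholders rather than the exact file):
[supervisord]
nodaemon=true

[program:httpd]
command=/usr/sbin/httpd -DFOREGROUND
autostart=true

[program:UserInput]
command=/opt/scripts/user_input.sh
autostart=true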
While running the Docker image, the 2nd process executes but does not wait for the input. Why is that?
2021-02-09 16:46:32,901 CRIT Supervisor running as root (no user in config file)
2021-02-09 16:46:32,901 WARN No file matches via include "/etc/supervisord/*.conf"
2021-02-09 16:46:32,903 INFO supervisord started with pid 1
2021-02-09 16:46:33,908 INFO spawned: 'supervisor_stdout' with pid 10
2021-02-09 16:46:33,911 INFO spawned: 'UserInput' with pid 11
2021-02-09 16:46:33,914 DEBG 'UserInput' stdout output:
BMC_DATABASE_HOST:
2021-02-09 16:46:33,939 DEBG 'supervisor_stdout' stdout output:
READY
2021-02-09 16:46:33,940 DEBG supervisor_stdout: ACKNOWLEDGED -> READY
2021-02-09 16:46:34,942 INFO success: supervisor_stdout entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-02-09 16:46:34,942 INFO success: UserInput entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
Background: I have a system behind a proxy/firewall. I can reach Docker to pull images, but I do not have a username/password to access any other sites. Therefore my SonarQube Docker container is essentially offline.
Question: The Docker container starts fine the first time, but fails to restart. This happens in two ways: either a manually installed plugin reports an error that it failed to download from the update-center URL, or the container simply starts shutting down immediately after it starts. Both failures stop the application, which closes the container. I do not seem to be able (or understand how) to modify sonar.properties to disable the update center, and I need guidance.
I have asked on the GitHub repository for the container without much help: https://github.com/SonarSource/docker-sonarqube/issues/76#issuecomment-364563967 The '-Dsonar.updatecenter.activate=false' option does not work when I try it.
Simply shutting down
2018.02.09 21:45:38 INFO ce[][o.s.p.ProcessEntryPoint] Starting ce
2018.02.09 21:45:38 INFO ce[][o.s.ce.app.CeServer] Compute Engine starting up...
2018.02.09 21:45:39 INFO ce[][o.e.p.PluginsService] no modules loaded
2018.02.09 21:45:39 INFO ce[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.index.reindex.ReindexPlugin]
2018.02.09 21:45:39 INFO ce[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.join.ParentJoinPlugin]
2018.02.09 21:45:39 INFO ce[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.percolator.PercolatorPlugin]
2018.02.09 21:45:39 INFO ce[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2018.02.09 21:45:41 INFO ce[][o.s.s.e.EsClientProvider] Connected to local Elasticsearch: [127.0.0.1:9001]
2018.02.09 21:45:41 INFO ce[][o.sonar.db.Database] Create JDBC data source for jdbc:postgresql://pgsonar:5432/sonar
2018.02.09 21:45:43 INFO ce[][o.s.s.p.ServerFileSystemImpl] SonarQube home: /opt/sonarqube
2018.02.09 21:45:43 INFO ce[][o.s.c.c.CePluginRepository] Load plugins
2018.02.09 21:45:45 INFO ce[][o.s.c.q.PurgeCeActivities] Delete the Compute Engine tasks created before Sun Aug 13 21:45:45 UTC 2017
2018.02.09 21:45:45 INFO ce[][o.s.ce.app.CeServer] Compute Engine is operational
2018.02.09 21:45:45 INFO app[][o.s.a.SchedulerImpl] Process[ce] is up
2018.02.09 21:45:45 INFO app[][o.s.a.SchedulerImpl] SonarQube is up
2018.02.09 21:47:12 INFO app[][o.s.a.SchedulerImpl] Stopping SonarQube
2018.02.09 21:47:13 INFO ce[][o.s.p.StopWatcher] Stopping process
2018.02.09 21:47:13 INFO ce[][o.s.ce.app.CeServer] Compute Engine is stopping...
2018.02.09 21:47:13 INFO ce[][o.s.c.t.CeProcessingSchedulerImpl] Waiting for workers to finish in-progress tasks
2018.02.09 21:47:14 INFO ce[][o.s.ce.app.CeServer] Compute Engine is stopped
2018.02.09 21:47:15 INFO app[][o.s.a.SchedulerImpl] Process [ce] is stopped
2018.02.09 21:47:15 INFO web[][o.s.p.StopWatcher] Stopping process
2018.02.09 21:47:18 INFO app[][o.s.a.SchedulerImpl] Process [web] is stopped
2018.02.09 21:47:18 INFO app[][o.s.a.SchedulerImpl] Process [es] is stopped
2018.02.09 21:47:18 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
2018.02.09 21:47:18 WARN app[][o.s.a.p.AbstractProcessMonitor] Process exited with exit value [es]: 143
chown: cannot access '/opt/sonarqube/temp/README.txt': No such file or directory
I will update with the failed-download log later (no access to the logs at this exact moment).
Regarding the README.txt issue, you have to create a volume and mount the temp folder (note that I use the postgres setup from anorak:girl). You can then start and stop with no problems.
sudo docker volume create sonarqube-temp
sudo docker run -d --name sonarqube --link sonar-postgres:pgsonar -p 9000:9000 -e SONARQUBE_JDBC_USERNAME=sonar -e SONARQUBE_JDBC_PASSWORD='secure' -e SONARQUBE_JDBC_URL=jdbc:postgresql://pgsonar:5432/sonar -v sonarqube-temp:/opt/sonarqube/temp sonarqube:lts
Regarding the UpdateCenter issue, the workaround is to specify the configuration with the run command (this is specific to Godin's Docker container for SonarQube, through his run.sh script):
sudo docker run -d --name sonarqube --link sonar-postgres:pgsonar -p 9000:9000 -e SONARQUBE_JDBC_USERNAME=sonar -e SONARQUBE_JDBC_PASSWORD='secure' -e SONARQUBE_JDBC_URL=jdbc:postgresql://pgsonar:5432/sonar -v sonarqube-temp:/opt/sonarqube/temp sonarqube:lts -Dsonar.updatecenter.activate=false
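If you would rather persist the setting than pass it on every run, a sketch of the same idea (untested here; the path follows the official image layout) is to mount your own sonar.properties with the flag set:
echo 'sonar.updatecenter.activate=false' >> sonar.properties
sudo docker run -d --name sonarqube --link sonar-postgres:pgsonar -p 9000:9000 -e SONARQUBE_JDBC_USERNAME=sonar -e SONARQUBE_JDBC_PASSWORD='secure' -e SONARQUBE_JDBC_URL=jdbc:postgresql://pgsonar:5432/sonar -v sonarqube-temp:/opt/sonarqube/temp -v "$(pwd)/sonar.properties":/opt/sonarqube/conf/sonar.properties sonarqube:lts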
The ECS service is exiting after running for a couple of seconds. If we create a container from the image manually, it runs fine with the command: sudo docker run -i -p 9100:9100 -p 9110:9110 -p 9120:9120 -p 9130:9130 847782638323.dkr.ecr.us-east-1.amazonaws.com/bytemark/cap
ECS logs are as follows:
2017-01-23T19:37:00Z [INFO] Created docker container for task bytemark-cap:2 arn:aws:ecs:us-east-1:847782638323:task/5d0c49a2-9591-469e-b05b-ebbaaefac26e, Status: (CREATED->RUNNING) Containers: [bytemark-cap-container (CREATED->RUNNING),]: bytemark-cap-container(847782638323.dkr.ecr.us-east-1.amazonaws.com/bytemark/cap:latest) (CREATED->RUNNING) -> c8ee05c5cd688209939d96b54cb0c74c4122686036362d7bfa19f85bdc2dd56e
2017-01-23T19:37:00Z [INFO] Starting container module="TaskEngine" task="bytemark-cap:2 arn:aws:ecs:us-east-1:847782638323:task/5d0c49a2-9591-469e-b05b-ebbaaefac26e, Status: (CREATED->RUNNING) Containers: [bytemark-cap-container (CREATED->RUNNING),]" container="bytemark-cap-container(847782638323.dkr.ecr.us-east-1.amazonaws.com/bytemark/cap:latest) (CREATED->RUNNING)"
2017-01-23T19:37:01Z [INFO] Task change event module="TaskEngine" event="{TaskArn:arn:aws:ecs:us-east-1:847782638323:task/5d0c49a2-9591-469e-b05b-ebbaaefac26e Status:RUNNING Reason: SentStatus:NONE}"
2017-01-23T19:37:01Z [INFO] Adding event module="eventhandler" change="ContainerChange: arn:aws:ecs:us-east-1:847782638323:task/5d0c49a2-9591-469e-b05b-ebbaaefac26e bytemark-cap-container -> RUNNING, Ports [{9100 9100 0.0.0.0 0} {9110 9110 0.0.0.0 0} {9120 9120 0.0.0.0 0} {9130 9130 0.0.0.0 0}], Known Sent: NONE"
2017-01-23T19:37:01Z [INFO] Adding event module="eventhandler" change="TaskChange: arn:aws:ecs:us-east-1:847782638323:task/5d0c49a2-9591-469e-b05b-ebbaaefac26e -> RUNNING, Known Sent: NONE"
2017-01-23T19:37:01Z [INFO] Sending container change module="eventhandler" event="ContainerChange: arn:aws:ecs:us-east-1:847782638323:task/5d0c49a2-9591-469e-b05b-ebbaaefac26e bytemark-cap-container -> RUNNING, Ports [{9100 9100 0.0.0.0 0} {9110 9110 0.0.0.0 0} {9120 9120 0.0.0.0 0} {9130 9130 0.0.0.0 0}], Known Sent: NONE" change="ContainerChange: arn:aws:ecs:us-east-1:847782638323:task/5d0c49a2-9591-469e-b05b-ebbaaefac26e bytemark-cap-container -> RUNNING, Ports [{9100 9100 0.0.0.0 0} {9110 9110 0.0.0.0 0} {9120 9120 0.0.0.0 0} {9130 9130 0.0.0.0 0}], Known Sent: NONE"
2017-01-23T19:37:01Z [INFO] Redundant container state change for task bytemark-cap:2 arn:aws:ecs:us-east-1:847782638323:task/5d0c49a2-9591-469e-b05b-ebbaaefac26e, Status: (RUNNING->RUNNING) Containers: [bytemark-cap-container (RUNNING->RUNNING),]: bytemark-cap-container(847782638323.dkr.ecr.us-east-1.amazonaws.com/bytemark/cap:latest) (RUNNING->RUNNING) to RUNNING, but already RUNNING
2017-01-23T19:37:01Z [INFO] Sending task change module="eventhandler" event="TaskChange: arn:aws:ecs:us-east-1:847782638323:task/5d0c49a2-9591-469e-b05b-ebbaaefac26e -> RUNNING, Known Sent: NONE" change="TaskChange: arn:aws:ecs:us-east-1:847782638323:task/5d0c49a2-9591-469e-b05b-ebbaaefac26e -> RUNNING, Known Sent: NONE"
2017-01-23T19:37:01Z [INFO] Task change event module="TaskEngine" event="{TaskArn:arn:aws:ecs:us-east-1:847782638323:task/5d0c49a2-9591-469e-b05b-ebbaaefac26e Status:STOPPED Reason: SentStatus:RUNNING}"
2017-01-23T19:37:01Z [INFO] Error retrieving stats for container c8ee05c5cd688209939d96b54cb0c74c4122686036362d7bfa19f85bdc2dd56e: context canceled
2017-01-23T19:37:01Z [INFO] Adding event module="eventhandler" change="ContainerChange: arn:aws:ecs:us-east-1:847782638323:task/5d0c49a2-9591-469e-b05b-ebbaaefac26e bytemark-cap-container -> STOPPED, Exit 0, , Known Sent: RUNNING"
2017-01-23T19:37:01Z [INFO] Adding event module="eventhandler" change="TaskChange: arn:aws:ecs:us-east-1:847782638323:task/5d0c49a2-9591-469e-b05b-ebbaaefac26e -> STOPPED, Known Sent: RUNNING"
2017-01-23T19:37:01Z [INFO] Container c8ee05c5cd688209939d96b54cb0c74c4122686036362d7bfa19f85bdc2dd56e is terminal, stopping stats collection
Is there a way to debug the actual cause of the ECS service exiting?
Thank you.
If you can SSH into the Docker instances (if you created them through ECS and specified a key, or are using your own instances), you can get the logs from Docker.
I had an issue where my only Docker container was dying after a few seconds. You can list the containers (including stopped ones) with
docker ps -a
Find the container you want to see the logs for and use
docker logs <container id>
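If SSH is not an option, the stopped reason that ECS recorded for the task can usually be pulled with the AWS CLI as well (the cluster name and task ARN below are placeholders):
aws ecs describe-tasks --cluster <cluster-name> --tasks <task-arn> --query 'tasks[].stoppedReason'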