Who can help with Docker Static Analysis with Clair?
I get an error when running an analysis. Can you help me figure it out, or tell me how to install the Docker Clair scanner correctly?
Getting Setup
git clone git@github.com:Charlie-belmer/Docker-security-example.git
docker-compose.yml
version: '2.1'
services:
postgres:
image: postgres:12.1
restart: unless-stopped
volumes:
- ./docker-compose-data/postgres-data/:/var/lib/postgresql/data:rw
environment:
- POSTGRES_PASSWORD=ChangeMe
- POSTGRES_USER=clair
- POSTGRES_DB=clair
clair:
image: quay.io/coreos/clair:v4.3.4
restart: unless-stopped
volumes:
- ./docker-compose-data/clair-config/:/config/:ro
- ./docker-compose-data/clair-tmp/:/tmp/:rw
depends_on:
postgres:
condition: service_started
command: [--log-level=debug, --config, /config/config.yml]
user: root
clairctl:
image: jgsqware/clairctl:latest
restart: unless-stopped
environment:
- DOCKER_API_VERSION=1.41
volumes:
- ./docker-compose-data/clairctl-reports/:/reports/:rw
- /var/run/docker.sock:/var/run/docker.sock:ro
depends_on:
clair:
condition: service_started
user: root
docker-compose up
The server starts but gets stuck repeating the same message.
I don't understand what it doesn't like:
test@parallels-virtual-platform:~/Docker-security-example/clair$ docker-compose up
clair_postgres_1 is up-to-date
Recreating clair_clair_1 ... done
Recreating clair_clairctl_1 ... done
Attaching to clair_postgres_1, clair_clair_1, clair_clairctl_1
clair_1 | flag provided but not defined: -log-level
clair_1 | Usage of /bin/clair:
clair_1 | -conf value
clair_1 | The file system path to Clair's config file.
clair_1 | -mode value
clair_1 | The operation mode for this server. (default combo)
postgres_1 |
postgres_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres_1 |
postgres_1 | 2021-11-16 22:55:36.851 UTC [1] LOG: starting PostgreSQL 12.1 (Debian 12.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
postgres_1 | 2021-11-16 22:55:36.851 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_1 | 2021-11-16 22:55:36.851 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_1 | 2021-11-16 22:55:36.853 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1 | 2021-11-16 22:55:36.877 UTC [24] LOG: database system was shut down at 2021-11-16 22:54:58 UTC
postgres_1 | 2021-11-16 22:55:36.888 UTC [1] LOG: database system is ready to accept connections
postgres_1 | 2021-11-16 23:01:15.219 UTC [1] LOG: received smart shutdown request
postgres_1 | 2021-11-16 23:01:15.225 UTC [1] LOG: background worker "logical replication launcher" (PID 30) exited with exit code 1
postgres_1 |
postgres_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres_1 |
postgres_1 | 2021-11-16 23:02:11.993 UTC [1] LOG: starting PostgreSQL 12.1 (Debian 12.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
postgres_1 | 2021-11-16 23:02:11.994 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_1 | 2021-11-16 23:02:11.994 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_1 | 2021-11-16 23:02:11.995 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1 | 2021-11-16 23:02:12.009 UTC [26] LOG: database system was interrupted; last known up at 2021-11-16 23:00:37 UTC
postgres_1 | 2021-11-16 23:02:12.164 UTC [26] LOG: database system was not properly shut down; automatic recovery in progress
postgres_1 | 2021-11-16 23:02:12.166 UTC [26] LOG: redo starts at 0/1745C50
postgres_1 | 2021-11-16 23:02:12.166 UTC [26] LOG: invalid record length at 0/1745D38: wanted 24, got 0
postgres_1 | 2021-11-16 23:02:12.166 UTC [26] LOG: redo done at 0/1745D00
postgres_1 | 2021-11-16 23:02:12.180 UTC [1] LOG: database system is ready to accept connections
postgres_1 | 2021-11-16 23:02:12.471 UTC [33] ERROR: duplicate key value violates unique constraint "lock_name_key"
postgres_1 | 2021-11-16 23:02:12.471 UTC [33] DETAIL: Key (name)=(updater) already exists.
postgres_1 | 2021-11-16 23:02:12.471 UTC [33] STATEMENT: INSERT INTO Lock(name, owner, until) VALUES($1, $2, $3)
clair_clair_1 exited with code 2
... (the same "flag provided but not defined: -log-level" usage output and exit code 2 repeat as Docker keeps restarting the clair container)
Installing a bad container
docker pull imiell/bad-dockerfile
docker-compose exec clairctl clairctl analyze -l imiell/bad-dockerfile
client quit unexpectedly
2021-11-16 23:05:19.221606 C | cmd: pushing image "imiell/bad-dockerfile:latest": pushing layer to clair: Post http://clair:6060/v1/layers: dial tcp: lookup clair: Try again
I don't understand what it doesn't like about the analysis.
I just solved this yesterday. Version 4.3.4 of Clair only supports two command-line options, -mode and -conf. Your output bears this out:
clair_1 | flag provided but not defined: -log-level
clair_1 | Usage of /bin/clair:
clair_1 | -conf value
clair_1 | The file system path to Clair's config file.
clair_1 | -mode value
clair_1 | The operation mode for this server. (default combo)
Change the command line to only specify your configuration file (line 23 of your docker-compose.yml) and place your debug directive in the configuration file.
command: [--conf, /config/config.yml]
This should get Clair running.
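For reference, in Clair v4 the log level is set in the config file itself rather than on the command line. A minimal sketch of what /config/config.yml might look like (field names follow Clair v4's config format; the connection strings below are assumptions based on the postgres service defined in the compose file above, so adjust them to your setup):

log_level: debug
http_listen_addr: ":6060"
introspection_addr: ":6061"
indexer:
  connstring: host=postgres port=5432 user=clair dbname=clair password=ChangeMe sslmode=disable
  migrations: true
matcher:
  connstring: host=postgres port=5432 user=clair dbname=clair password=ChangeMe sslmode=disable
  migrations: true
notifier:
  connstring: host=postgres port=5432 user=clair dbname=clair password=ChangeMe sslmode=disable
  migrations: true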
I think you are using the old clairctl with the new Clair v4. You should be using the clairctl from here: https://github.com/quay/clair/releases/tag/v4.3.5.
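A rough sketch of using the v4 clairctl binary instead of the jgsqware container (the release asset name and the Clair address are assumptions; check the release page and clairctl report --help for the exact names and flags on your system):

# download the standalone clairctl that matches your Clair version
wget https://github.com/quay/clair/releases/download/v4.3.5/clairctl-linux-amd64 -O clairctl
chmod +x clairctl
# ask the running Clair v4 instance for a vulnerability report on an image
./clairctl report --host http://localhost:6060 imiell/bad-dockerfile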
The console logs /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/tables during startup of the Docker container (see the full log below). What could be the cause of this, given that I have the following code?
File overview
users.sql
BEGIN TRANSACTION;
CREATE TABLE users (
id serial PRIMARY KEY,
name VARCHAR(100),
email text UNIQUE NOT NULL,
entries BIGINT DEFAULT 0,
joined TIMESTAMP NOT NULL
);
COMMIT;
deploy_schemas.sql
-- Deploy fresh database tables
\i '/docker-entrypoint-initdb.d/tables/users.sql'
\i '/docker-entrypoint-initdb.d/tables/login.sql'
Dockerfile (in postgres folder)
FROM postgres:12.2
ADD /tables/ /docker-entrypoint-initdb.d/tables/
ADD deploy_schemas.sql /docker-entrypoint-initdb.d/tables/
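(For context on the "ignoring" message: the stock postgres image's docker-entrypoint.sh only processes entries found directly in /docker-entrypoint-initdb.d and skips anything it does not recognise, such as a subdirectory like tables/. A simplified sketch of that loop, not the exact script:)

for f in /docker-entrypoint-initdb.d/*; do
    case "$f" in
        *.sh)     echo "running $f"; . "$f" ;;
        *.sql)    echo "running $f"; psql -f "$f" ;;
        *.sql.gz) echo "running $f"; gunzip -c "$f" | psql ;;
        *)        echo "ignoring $f" ;;   # directories and unknown files land here
    esac
done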
docker-compose.yml
version: "3.3"
services:
# Backend API
smart-brain-app:
container_name: backend
# image: node:14.2.0
build: ./
command: npm start
working_dir: /usr/src/smart-brain-api
environment:
POSTGRES_URI: postgres://postgres:1212@postgres:5431/smart-brain-api-db
links:
- postgres
ports:
- "3000:3000"
volumes:
- ./:/usr/src/smart-brain-api
# Postgres
postgres:
build: ./postgres
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: 1212
POSTGRES_DB: smart-brain-api-db
POSTGRES_HOST: postgres
ports:
- "5431:5432"
Dockerfile
FROM node:14.2.0
WORKDIR /usr/src/smart-brain-api
COPY ./ ./
RUN npm install | npm audit fix
CMD ["/bin/bash"]
Complete Log
Creating smart-brain-api_postgres_1 ... done
Creating backend ... done
Attaching to smart-brain-api_postgres_1, backend
postgres_1 | The files belonging to this database system will be owned by user "postgres".
postgres_1 | This user must also own the server process.
postgres_1 |
postgres_1 | The database cluster will be initialized with locale "en_US.utf8".
postgres_1 | The default database encoding has accordingly been set to "UTF8".
postgres_1 | The default text search configuration will be set to "english".
postgres_1 |
postgres_1 | Data page checksums are disabled.
postgres_1 |
postgres_1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok
postgres_1 | creating subdirectories ... ok
postgres_1 | selecting dynamic shared memory implementation ... posix
postgres_1 | selecting default max_connections ... 100
postgres_1 | selecting default shared_buffers ... 128MB
postgres_1 | selecting default time zone ... Etc/UTC
postgres_1 | creating configuration files ... ok
postgres_1 | running bootstrap script ... ok
backend |
backend | > node@1.0.0 start /usr/src/smart-brain-api
backend | > npx nodemon server.js
backend |
postgres_1 | performing post-bootstrap initialization ... ok
postgres_1 | syncing data to disk ... ok
postgres_1 |
postgres_1 | initdb: warning: enabling "trust" authentication for local connections
postgres_1 | You can change this by editing pg_hba.conf or using the option -A, or
postgres_1 | --auth-local and --auth-host, the next time you run initdb.
postgres_1 |
postgres_1 | Success. You can now start the database server using:
postgres_1 |
postgres_1 | pg_ctl -D /var/lib/postgresql/data -l logfile start
postgres_1 |
postgres_1 | waiting for server to start....2020-05-10 01:31:31.548 UTC [46] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
postgres_1 | 2020-05-10 01:31:31.549 UTC [46] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1 | 2020-05-10 01:31:31.565 UTC [47] LOG: database system was shut down at 2020-05-10 01:31:31 UTC
postgres_1 | 2020-05-10 01:31:31.569 UTC [46] LOG: database system is ready to accept connections
postgres_1 | done
postgres_1 | server started
postgres_1 | CREATE DATABASE
postgres_1 |
postgres_1 |
postgres_1 | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/tables
postgres_1 |
postgres_1 | 2020-05-10 01:31:31.772 UTC [46] LOG: received fast shutdown request
postgres_1 | waiting for server to shut down....2020-05-10 01:31:31.774 UTC [46] LOG: aborting any active transactions
postgres_1 | 2020-05-10 01:31:31.775 UTC [46] LOG: background worker "logical replication launcher" (PID 53) exited with exit code 1
postgres_1 | 2020-05-10 01:31:31.778 UTC [48] LOG: shutting down
postgres_1 | 2020-05-10 01:31:31.791 UTC [46] LOG: database system is shut down
postgres_1 | done
postgres_1 | server stopped
postgres_1 |
postgres_1 | PostgreSQL init process complete; ready for start up.
postgres_1 |
postgres_1 | 2020-05-10 01:31:31.884 UTC [1] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
postgres_1 | 2020-05-10 01:31:31.884 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_1 | 2020-05-10 01:31:31.884 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_1 | 2020-05-10 01:31:31.894 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1 | 2020-05-10 01:31:31.910 UTC [64] LOG: database system was shut down at 2020-05-10 01:31:31 UTC
postgres_1 | 2020-05-10 01:31:31.914 UTC [1] LOG: database system is ready to accept connections
Hi, I am deploying an etcd container in a k8s pod with data_dir mapped to a persistent volume claim created for the pod. The first time the PVC and pod are created, the etcd service is up and running and everything works as expected.
Once I delete the k8s deployment and create it again, during bootstrap it does recognize that it should restart an existing member (probably due to the non-empty data_dir), but it fails to start the etcd service with an "unexpected fault address" error.
We are currently using a single-node etcd cluster configuration, which suffices for our need to have both the service and the etcd DB in a single pod. One more thing: the intention of using a persistent volume is to ensure that there is no data loss between pod restarts.
etcd Version: 3.3.11
PVC.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: sample-etcd-db-pvc
annotations:
volume.beta.kubernetes.io/storage-class: glusterfs-storage
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: sample-etcd-db-service
spec:
replicas: 1
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: sample-etcd-db-service
version: 0.1.0-rc.29
template:
metadata:
labels:
app: sample-etcd-db-service
version: 0.1.0-rc.29
spec:
containers:
- name: sample-etcd-db
image: quay.io/coreos/etcd:v3.3.11
imagePullPolicy: IfNotPresent
command:
- etcd
- --name=sample-etcd-db-new
- --listen-client-urls=http://0.0.0.0:2379
- --advertise-client-urls=http://0.0.0.0:2379
- --data-dir=/var/etcd/data
volumeMounts:
- mountPath: /var/etcd/data
name: sample-etcd-db-pvc
ports:
- containerPort: 2379
volumes:
- name: sample-etcd-db-pvc
persistentVolumeClaim:
claimName: sample-etcd-db-pvc
Creating PVC and Deployment for the first time in k8s cluster
[root@centos-vm etcd_bug]# kubectl create -f pvc.yml
[root@centos-vm etcd_bug]# kubectl create -f deployment.yml
--- Trying to restart etcd by deleting and creating k8s deployment again ---
[root@centos-vm etcd_bug]# kubectl delete deployment sample-etcd-db-service
[root@centos-vm etcd_bug]# kubectl create -f deployment.yml
Attaching logs from the pod.
[root@centos-vm etcd_bug]# kubectl logs sample-etcd-db-service-98f4f9459-4s27f -c sample-etcd-db
2019-02-19 20:25:22.344488 I | etcdmain: etcd Version: 3.3.11
2019-02-19 20:25:22.344636 I | etcdmain: Git SHA: 2cf9e51
2019-02-19 20:25:22.344645 I | etcdmain: Go Version: go1.10.7
2019-02-19 20:25:22.344651 I | etcdmain: Go OS/Arch: linux/amd64
2019-02-19 20:25:22.344659 I | etcdmain: setting maximum number of CPUs to 8, total number of available CPUs is 8
2019-02-19 20:25:22.348118 N | etcdmain: the server is already initialized as member before, starting as etcd member...
2019-02-19 20:25:22.348516 I | embed: listening for peers on http://0.0.0.0:2380
2019-02-19 20:25:22.349140 I | embed: listening for client requests on 0.0.0.0:2379
2019-02-19 20:25:22.391321 I | etcdserver: name = sample-etcd-db-new
2019-02-19 20:25:22.391362 I | etcdserver: data dir = /var/etcd/data
2019-02-19 20:25:22.391379 I | etcdserver: member dir = /var/etcd/data/member
2019-02-19 20:25:22.391387 I | etcdserver: heartbeat = 100ms
2019-02-19 20:25:22.391394 I | etcdserver: election = 1000ms
2019-02-19 20:25:22.391401 I | etcdserver: snapshot count = 100000
2019-02-19 20:25:22.391423 I | etcdserver: advertise client URLs = http://0.0.0.0:2379
2019-02-19 20:25:22.407858 I | etcdserver: restarting member 1c70f9bbb41018f in cluster a0d2de0531db7884 at commit index 4
2019-02-19 20:25:22.407995 I | raft: 1c70f9bbb41018f became follower at term 2
2019-02-19 20:25:22.408039 I | raft: newRaft 1c70f9bbb41018f [peers: [], term: 2, commit: 4, applied: 0, lastindex: 4, lastterm: 2]
unexpected fault address 0x7f43819ee000
fatal error: fault
[signal SIGBUS: bus error code=0x2 addr=0x7f43819ee000 pc=0x8808fd]
goroutine 1 [running]:
runtime.throw(0xfc556e, 0x5)
/usr/local/go/src/runtime/panic.go:616 +0x81 fp=0xc420222ed0 sp=0xc420222eb0 pc=0x42ade1
runtime.sigpanic()
/usr/local/go/src/runtime/signal_unix.go:385 +0x273 fp=0xc420222f20 sp=0xc420222ed0 pc=0x4405b3
github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt.(*DB).page(...)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt/db.go:859
github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt.(*Tx).page(...)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt/tx.go:599
github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt.(*Bucket).pageNode(0xc4202b00f8, 0x2, 0x18, 0xc4201b78e0)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt/bucket.go:724 +0xad fp=0xc420222f98 sp=0xc420222f20 pc=0x8808fd
github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt.(*Cursor).search(0xc420223138, 0x16062d0, 0x7, 0x7, 0x2)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt/cursor.go:254 +0x50 fp=0xc420223050 sp=0xc420222f98 pc=0x881d90
github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt.(*Cursor).seek(0xc420223138, 0x16062d0, 0x7, 0x7, 0x0, 0x0, 0x4, 0xc4201bc8c0, 0x1, 0x1, ...)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt/cursor.go:159 +0xa5 fp=0xc4202230a0 sp=0xc420223050 pc=0x881695
github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt.(*Bucket).CreateBucket(0xc4202b00f8, 0x16062d0, 0x7, 0x7, 0x0, 0x0, 0x0)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt/bucket.go:165 +0xfa fp=0xc4202231a0 sp=0xc4202230a0 pc=0x87dc6a
github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt.(*Tx).CreateBucket(0xc4202b00e0, 0x16062d0, 0x7, 0x7, 0x2, 0x1c70f9bbb41018f, 0x4)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt/tx.go:108 +0x4f fp=0xc4202231e8 sp=0xc4202231a0 pc=0x88cbcf
github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/mvcc/backend.(*batchTx).UnsafeCreateBucket(0xc4201be6f0, 0x16062d0, 0x7, 0x7)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/mvcc/backend/batch_tx.go:48 +0x6b fp=0xc420223280 sp=0xc4202231e8 pc=0x8de51b
github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdserver/membership.mustCreateBackendBuckets(0x10b3700, 0xc4201bb9d0)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdserver/membership/store.go:166 +0xb7 fp=0xc4202232c0 sp=0xc420223280 pc=0x957827
github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdserver/membership.(*RaftCluster).SetBackend(0xc4201ce720, 0x10b3700, 0xc4201bb9d0)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdserver/membership/cluster.go:203 +0x54 fp=0xc4202232e0 sp=0xc4202232c0 pc=0x9525f4
github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdserver.NewServer(0x7ffee163af6a, 0x12, 0x0, 0x0, 0x0, 0x0, 0xc4201a1200, 0x1, 0x1, 0xc4201a1100, ...)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdserver/server.go:396 +0x921 fp=0xc420223ab0 sp=0xc4202232e0 pc=0xb762e1
github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/embed.StartEtcd(0xc420294000, 0xc420294480, 0x0, 0x0)
.....
[root@centos-vm etcd_bug]# kubectl logs sample-etcd-db-service-98f4f9459-4s27f -c sample-etcd-db
2019-02-19 20:36:20.383231 I | etcdmain: etcd Version: 3.3.11
2019-02-19 20:36:20.383404 I | etcdmain: Git SHA: 2cf9e51
2019-02-19 20:36:20.383413 I | etcdmain: Go Version: go1.10.7
2019-02-19 20:36:20.383419 I | etcdmain: Go OS/Arch: linux/amd64
2019-02-19 20:36:20.383434 I | etcdmain: setting maximum number of CPUs to 8, total number of available CPUs is 8
2019-02-19 20:36:20.386048 N | etcdmain: the server is already initialized as member before, starting as etcd member...
2019-02-19 20:36:20.386987 I | embed: listening for peers on http://localhost:2380
2019-02-19 20:36:20.388330 I | embed: listening for client requests on 0.0.0.0:2379
2019-02-19 20:36:20.437097 I | etcdserver: name = sample-etcd-db-new
2019-02-19 20:36:20.437177 I | etcdserver: data dir = /var/etcd/data
2019-02-19 20:36:20.437198 I | etcdserver: member dir = /var/etcd/data/member
2019-02-19 20:36:20.437211 I | etcdserver: heartbeat = 100ms
2019-02-19 20:36:20.437222 I | etcdserver: election = 1000ms
2019-02-19 20:36:20.437233 I | etcdserver: snapshot count = 100000
2019-02-19 20:36:20.437284 I | etcdserver: advertise client URLs = http://0.0.0.0:2379
2019-02-19 20:36:20.456385 I | etcdserver: restarting member 1c70f9bbb41018f in cluster a0d2de0531db7884 at commit index 4
2019-02-19 20:36:20.456489 I | raft: 1c70f9bbb41018f became follower at term 2
2019-02-19 20:36:20.456520 I | raft: newRaft 1c70f9bbb41018f [peers: [], term: 2, commit: 4, applied: 0, lastindex: 4, lastterm: 2]
unexpected fault address 0x7efabfc3e000
fatal error: fault
[signal SIGBUS: bus error code=0x2 addr=0x7efabfc3e000 pc=0x8808fd]
goroutine 1 [running]:
runtime.throw(0xfc556e, 0x5)
/usr/local/go/src/runtime/panic.go:616 +0x81 fp=0xc420258ed0 sp=0xc420258eb0 pc=0x42ade1
runtime.sigpanic()
/usr/local/go/src/runtime/signal_unix.go:385 +0x273 fp=0xc420258f20 sp=0xc420258ed0 pc=0x4405b3
github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt.(*DB).page(...)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt/db.go:859
github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt.(*Tx).page(...)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt/tx.go:599
github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt.(*Bucket).pageNode(0xc4203220f8, 0x2, 0x18, 0xc420227580)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt/bucket.go:724 +0xad fp=0xc420258f98 sp=0xc420258f20 pc=0x8808fd
github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt.(*Cursor).search(0xc420259138, 0x16062d0, 0x7, 0x7, 0x2)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt/cursor.go:254 +0x50 fp=0xc420259050 sp=0xc420258f98 pc=0x881d90
github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt.(*Cursor).seek(0xc420259138, 0x16062d0, 0x7, 0x7, 0x0, 0x0, 0x4, 0xc4202287f0, 0x1, 0x1, ...)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt/cursor.go:159 +0xa5 fp=0xc4202590a0 sp=0xc420259050 pc=0x881695
github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt.(*Bucket).CreateBucket(0xc4203220f8, 0x16062d0, 0x7, 0x7, 0x0, 0x0, 0x0)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt/bucket.go:165 +0xfa fp=0xc4202591a0 sp=0xc4202590a0 pc=0x87dc6a
github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt.(*Tx).CreateBucket(0xc4203220e0, 0x16062d0, 0x7, 0x7, 0x2, 0x1c70f9bbb41018f, 0x4)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/bbolt/tx.go:108 +0x4f fp=0xc4202591e8 sp=0xc4202591a0 pc=0x88cbcf
github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/mvcc/backend.(*batchTx).UnsafeCreateBucket(0xc420240b40, 0x16062d0, 0x7, 0x7)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/mvcc/backend/batch_tx.go:48 +0x6b fp=0xc420259280 sp=0xc4202591e8 pc=0x8de51b
github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdserver/membership.mustCreateBackendBuckets(0x10b3700, 0xc4202c8620)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdserver/membership/store.go:166 +0xb7 fp=0xc4202592c0 sp=0xc420259280 pc=0x957827
github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdserver/membership.(*RaftCluster).SetBackend(0xc42022a960, 0x10b3700, 0xc4202c8620)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdserver/membership/cluster.go:203 +0x54 fp=0xc4202592e0 sp=0xc4202592c0 pc=0x9525f4
github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdserver.NewServer(0x7fff206f1fa8, 0x12, 0x0, 0x0, 0x0, 0x0, 0xc420280a00, 0x1, 0x1, 0xc420280700, ...)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdserver/server.go:396 +0x921 fp=0xc420259ab0 sp=0xc4202592e0 pc=0xb762e1
github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/embed.StartEtcd(0xc4202da000, 0xc4202da480, 0x0, 0x0)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/embed/etcd.go:179 +0x811 fp=0xc42025a6e8 sp=0xc420259ab0 pc=0xcb6361
github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdmain.startEtcd(0xc4202da000, 0xfc6677, 0x6, 0xc42025ad01, 0x2)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdmain/etcd.go:181 +0x40 fp=0xc42025a7b0 sp=0xc42025a6e8 pc=0xd263c0
github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdmain.startEtcdOrProxyV2()
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdmain/etcd.go:102 +0x1369 fp=0xc42025bf08 sp=0xc42025a7b0 pc=0xd25d79
github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdmain.Main()
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/etcdmain/main.go:46 +0x3f fp=0xc42025bf78 sp=0xc42025bf08 pc=0xd2c81f
main.main()
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/etcd/main.go:28 +0x20 fp=0xc42025bf88 sp=0xc42025bf78 pc=0xd2e910
runtime.main()
/usr/local/go/src/runtime/proc.go:198 +0x212 fp=0xc42025bfe0 sp=0xc42025bf88 pc=0x42c652
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:2361 +0x1 fp=0xc42025bfe8 sp=0xc42025bfe0 pc=0x459f81
goroutine 51 [chan receive]:
github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/pkg/logutil.(*MergeLogger).outputLoop(0xc4201c3720)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/pkg/logutil/merge_logger.go:174 +0x40d
created by github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/pkg/logutil.NewMergeLogger
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/pkg/logutil/merge_logger.go:92 +0x85
goroutine 104 [chan receive]:
github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/pkg/logutil.(*MergeLogger).outputLoop(0xc420262680)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/pkg/logutil/merge_logger.go:174 +0x40d
created by github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/pkg/logutil.NewMergeLogger
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/pkg/logutil/merge_logger.go:92 +0x85
goroutine 72 [chan receive]:
github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/pkg/logutil.(*MergeLogger).outputLoop(0xc420224860)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/pkg/logutil/merge_logger.go:174 +0x40d
created by github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/pkg/logutil.NewMergeLogger
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/pkg/logutil/merge_logger.go:92 +0x85
goroutine 116 [syscall]:
os/signal.signal_recv(0x0)
/usr/local/go/src/runtime/sigqueue.go:139 +0xa6
os/signal.loop()
/usr/local/go/src/os/signal/signal_unix.go:22 +0x22
created by os/signal.init.0
/usr/local/go/src/os/signal/signal_unix.go:28 +0x41
goroutine 91 [select]:
github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/wal.(*filePipeline).run(0xc42023fb00)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/wal/file_pipeline.go:89 +0x139
created by github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/wal.newFilePipeline
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/wal/file_pipeline.go:47 +0x11a
goroutine 90 [select]:
github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/mvcc/backend.(*backend).run(0xc4202c8620)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/mvcc/backend/backend.go:267 +0x180
created by github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/mvcc/backend.newBackend
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/mvcc/backend/backend.go:161 +0x2ea
goroutine 92 [select]:
github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/raft.(*node).run(0xc42022a9c0, 0xc4202d4100)
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/raft/node.go:313 +0x5f8
created by github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/raft.RestartNode
/tmp/etcd-release-3.3.11/etcd/release/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/github.com/coreos/etcd/raft/node.go:223 +0x321
Using the following docker-compose.yml file
version: '2'
services:
wordpress:
image: wordpress
ports:
- 8080:80
environment:
WORDPRESS_DB_NAME: my_db
WORDPRESS_DB_USER: root
WORDPRESS_DB_PASSWORD: password
volumes:
- ./src:/var/www/html
mysql:
image: mariadb
environment:
MYSQL_ROOT_PASSWORD: password
volumes:
- ./data_dir:/var/lib/mysql
When running the docker-compose up command, it gives me the following error:
Starting wp_mysql_1
Starting wp_wordpress_1
Attaching to wp_mysql_1, wp_wordpress_1
wordpress_1 |
wordpress_1 | Warning: mysqli::mysqli(): (HY000/2002): Connection refused in - on line 19
wordpress_1 |
wordpress_1 | MySQL Connection Error: (2002) Connection refused
mysql_1 | 2016-11-28 15:47:02 139858949081024 [Note] mysqld (mysqld 10.1.19-MariaDB-1~jessie) starting as process 1
...
mysql_1 | 2016-11-28 15:47:03 139858949081024 [Note] InnoDB: Using mutexes to ref count buffer pool pages
mysql_1 | 2016-11-28 15:47:03 139858949081024 [Note] InnoDB: The InnoDB memory heap is disabled
mysql_1 | 2016-11-28 15:47:03 139858949081024 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
mysql_1 | 2016-11-28 15:47:03 139858949081024 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
mysql_1 | 2016-11-28 15:47:03 139858949081024 [Note] InnoDB: Compressed tables use zlib 1.2.8
mysql_1 | 2016-11-28 15:47:03 139858949081024 [Note] InnoDB: Using Linux native AIO
mysql_1 | 2016-11-28 15:47:03 139858949081024 [Note] InnoDB: Using SSE crc32 instructions
mysql_1 | 2016-11-28 15:47:03 139858949081024 [Note] InnoDB: Initializing buffer pool, size = 256.0M
mysql_1 | 2016-11-28 15:47:03 139858949081024 [Note] InnoDB: Completed initialization of buffer pool
mysql_1 | 2016-11-28 15:47:03 139858949081024 [ERROR] InnoDB: auto-extending data file ./ibdata1 is of a different size 0 pages (rounded down to MB) than specified in the .cnf file: initial 768 pages, max 0 (relevant if non-zero) pages!
mysql_1 | 2016-11-28 15:47:03 139858949081024 [ERROR] InnoDB: Could not open or create the system tablespace. If you tried to add new data files to the system tablespace, and it failed here, you should now edit innodb_data_file_path in my.cnf back to what it was, and remove the new ibdata files InnoDB created in this failed attempt. InnoDB only wrote those files full of zeros, but did not yet use them in any way. But be careful: do not remove old data files which contain your precious data!
mysql_1 | 2016-11-28 15:47:03 139858949081024 [ERROR] Plugin 'InnoDB' init function returned error.
mysql_1 | 2016-11-28 15:47:03 139858949081024 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
mysql_1 | 2016-11-28 15:47:03 139858949081024 [Note] Plugin 'FEEDBACK' is disabled.
mysql_1 | 2016-11-28 15:47:03 139858949081024 [ERROR] Could not open mysql.plugin table. Some plugins may be not loaded
mysql_1 | 2016-11-28 15:47:03 139858949081024 [ERROR] Unknown/unsupported storage engine: InnoDB
mysql_1 | 2016-11-28 15:47:03 139858949081024 [ERROR] Aborting
mysql_1 |
wp_mysql_1 exited with code 1
wordpress_1 |
wordpress_1 | Warning: mysqli::mysqli(): php_network_getaddresses: getaddrinfo failed: Name or service not known in - on line 19
wordpress_1 |
wordpress_1 | Warning: mysqli::mysqli(): (HY000/2002): php_network_getaddresses: getaddrinfo failed: Name or service not known in - on line 19
wordpress_1 |
wordpress_1 | MySQL Connection Error: (2002) php_network_getaddresses: getaddrinfo failed: Name or service not known
But if I remove the volume from the mysql service, then it works fine! How can I mount the volume? I need the data to be persisted.
It's actually a problem with MariaDB. You cannot bind-mount a host folder for MariaDB's data directory, because Docker presents the shared files/folders to the database container as owned by root and writable only by root. The solution is to use a docker-compose named volume. As stated in the Docker documentation:
Docker has two options for containers to store files in the host machine, so that the files are persisted even after the container stops: volumes, and bind mounts. If you’re running Docker on Linux you can also use a tmpfs mount.
What you are trying to use is a bind mount, which does not work with MariaDB here. So we can use a Docker volume instead.
When you create a volume, it is stored within a directory on the Docker host. When you mount the volume into a container, this directory is what is mounted into the container. This is similar to the way that bind mounts work, except that volumes are managed by Docker and are isolated from the core functionality of the host machine. Volumes are stored in a part of the host filesystem which is managed by Docker (/var/lib/docker/volumes/ on Linux). So change your docker-compose file as follows:
version: '2'
services:
wordpress:
image: wordpress
ports:
- 8080:80
environment:
WORDPRESS_DB_NAME: my_db
WORDPRESS_DB_USER: root
WORDPRESS_DB_PASSWORD: password
volumes:
- ./src:/var/www/html
mysql:
image: mariadb
environment:
MYSQL_ROOT_PASSWORD: password
volumes:
- db_data:/var/lib/mysql
volumes:
db_data:
That is, use a named volume under the mysql service and declare it in the top-level volumes key. This tells docker-compose to create a Docker-managed volume, and your MariaDB data is persisted under the /var/lib/docker/volumes/<project-name>_db_data/_data directory on the host machine.
Also, after running the docker-compose up command, if you run
docker volume ls
you can see the volume that docker-compose created.
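To see where that data actually lives on the host, something like this should work (the exact volume name depends on your compose project name, so the placeholder below is an assumption):

# inspect the named volume; the "Mountpoint" field in the JSON output shows the host path,
# e.g. /var/lib/docker/volumes/<project-name>_db_data/_data
docker volume inspect <project-name>_db_data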
I'm having trouble configuring a Registry internal mirror. I'm always getting a Forbidden error.
When I access the URL https://registry-1.docker.io/v2/ directly, I get the same error too:
{"errors":[{"code":"UNAUTHORIZED","message":"access to the requested resource is not authorized","detail":null}]}
config.yml:
version: 0.1
log:
fields:
service: registry
storage:
delete:
enabled: true
cache:
blobdescriptor: inmemory
filesystem:
rootdirectory: /var/lib/registry
http:
addr: :5000
headers:
X-Content-Type-Options: [nosniff]
health:
storagedriver:
enabled: true
interval: 10s
threshold: 3
proxy:
remoteurl: https://registry-1.docker.io
Registry 2.0 error:
registry_1 | time="2015-11-11T21:14:07Z" level=warning msg="No HTTP secret provided - generated random secret. This may cause problems with uploads if multiple registries are behind a load-balancer. To provide a shared secret, fill in http.secret in the configuration file or set the REGISTRY_HTTP_SECRET environment variable." go.version=go1.4.3 instance.id=9d59c958-6764-4951-b438-c8280e5a6c62 version=v2.2.0
registry_1 | time="2015-11-11T21:14:07Z" level=info msg="redis not configured" go.version=go1.4.3 instance.id=9d59c958-6764-4951-b438-c8280e5a6c62 version=v2.2.0
registry_1 | time="2015-11-11T21:14:07Z" level=info msg="using inmemory blob descriptor cache" go.version=go1.4.3 instance.id=9d59c958-6764-4951-b438-c8280e5a6c62 version=v2.2.0
registry_1 | time="2015-11-11T21:14:07Z" level=info msg="Starting cached object TTL expiration scheduler..." go.version=go1.4.3 instance.id=9d59c958-6764-4951-b438-c8280e5a6c62 version=v2.2.0
registry_1 | time="2015-11-11T21:14:07Z" level=info msg="Starting upload purge in 8m0s" go.version=go1.4.3 instance.id=9d59c958-6764-4951-b438-c8280e5a6c62 version=v2.2.0
**registry_1 | panic: Get https://registry-1.docker.io/v2/: Forbidden**
registry_1 |
registry_1 | goroutine 1 [running]:
registry_1 | github.com/docker/distribution/registry/handlers.NewApp(0x7f7cf75022d8, 0xc208138300, 0xc208118500, 0x7f7cf75022d8)
registry_1 | /go/src/github.com/docker/distribution/registry/handlers/app.go:246 +0x25dc
registry_1 | github.com/docker/distribution/registry.NewRegistry(0x7f7cf7502318, 0xc208138300, 0xc208118500, 0xc208118500, 0x0, 0x0)
registry_1 | /go/src/github.com/docker/distribution/registry/registry.go:94 +0x2d4
registry_1 | github.com/docker/distribution/registry.func·001(0x1299880, 0xc20802b520, 0x1, 0x1)
registry_1 | /go/src/github.com/docker/distribution/registry/registry.go:57 +0x2d1
registry_1 | github.com/spf13/cobra.(*Command).execute(0x1299880, 0xc20800a010, 0x1, 0x1, 0x0, 0x0)
registry_1 | /go/src/github.com/docker/distribution/Godeps/_workspace/src/github.com/spf13/cobra/command.go:495 +0x65c
registry_1 | github.com/spf13/cobra.(*Command).Execute(0x1299880, 0x0, 0x0)
registry_1 | /go/src/github.com/docker/distribution/Godeps/_workspace/src/github.com/spf13/cobra/command.go:560 +0x18d
registry_1 | main.main()
registry_1 | /go/src/github.com/docker/distribution/cmd/registry/main.go:22 +0x2a
registry_1 |
registry_1 | goroutine 9 [syscall]:
registry_1 | os/signal.loop()
registry_1 | /usr/src/go/src/os/signal/signal_unix.go:21 +0x1f
registry_1 | created by os/signal.init·1
registry_1 | /usr/src/go/src/os/signal/signal_unix.go:27 +0x35
registry_1 |
registry_1 | goroutine 11 [sleep]:
registry_1 | github.com/docker/distribution/registry/handlers.func·009()
registry_1 | /go/src/github.com/docker/distribution/registry/handlers/app.go:938 +0x203
registry_1 | created by github.com/docker/distribution/registry/handlers.startUploadPurger
registry_1 | /go/src/github.com/docker/distribution/registry/handlers/app.go:945 +0x942
registry_1 |
registry_1 | goroutine 12 [select]:
registry_1 | github.com/docker/distribution/notifications.(*Broadcaster).run(0xc2081385d0)
registry_1 | /go/src/github.com/docker/distribution/notifications/sinks.go:80 +0x604
registry_1 | created by github.com/docker/distribution/notifications.NewBroadcaster
registry_1 | /go/src/github.com/docker/distribution/notifications/sinks.go:39 +0xea
registry_1 |
registry_1 | goroutine 13 [select]:
registry_1 | github.com/docker/distribution/registry/proxy/scheduler.func·001()
registry_1 | /go/src/github.com/docker/distribution/registry/proxy/scheduler/scheduler.go:133 +0x2d1
registry_1 | created by github.com/docker/distribution/registry/proxy/scheduler.(*TTLExpirationScheduler).Start
registry_1 | /go/src/github.com/docker/distribution/registry/proxy/scheduler/scheduler.go:152 +0x39a
registry_1 |
registry_1 | goroutine 17 [syscall, locked to thread]:
registry_1 | runtime.goexit()
registry_1 | /usr/src/go/src/runtime/asm_amd64.s:2232 +0x1
registry_registry_1 exited with code 2
Any idea?
Thanks!
People,
I have discovered why it doesn't work:
The domain registry-1.docker.io was present in my /etc/hosts file, pointing at a different IP address for a reverse proxy.
The problem is that this reverse proxy requires authentication. I used a traditional proxy via the HTTP_PROXY variable instead, and it worked.
:D
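For anyone hitting the same thing behind a corporate proxy, a hedged sketch of passing the proxy to the registry container instead of repointing registry-1.docker.io in /etc/hosts (the proxy host and port are placeholders; the registry:2 image reads its config from /etc/docker/registry/config.yml):

docker run -d -p 5000:5000 \
  -e HTTP_PROXY=http://proxy.example.com:3128 \
  -e HTTPS_PROXY=http://proxy.example.com:3128 \
  -v $(pwd)/config.yml:/etc/docker/registry/config.yml \
  registry:2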