Is there a simple CLI to probe DNS names/queries - Docker

Say there is an FQDN, www.ebay.com. How do I do a DNSSEC query that lists all records for a CNAME without installing any special modules/packages?
Ideally I would like an open-source tool to do this.
I need a VISUAL representation of the data plus a JSON data set.

http://dnsviz.net/
DNSViz is an excellent website for this.
If you have Docker installed:
docker run -it nrshrivatsan/dnsviz /bin/bash -c "dnsviz probe www.ebay.com"
The output is JSON; formatted, it looks like this:
.:
type: recursive
stub: false
analysis_start: '2018-05-13 02:55:58 UTC'
analysis_end: '2018-05-13 02:55:58 UTC'
clients_ipv4:
- 172.17.0.2
clients_ipv6: []
explicit_delegation: true
auth_ns_ip_mapping:
ns0.:
- 192.168.65.1
queries:
- qname: .
qclass: IN
qtype: NS
options:
flags: 256
edns_version: 0
edns_max_udp_payload: 4096
edns_flags: 32768
edns_options: []
tcp: false
responses:
192.168.65.1:
172.17.0.2:
message: >-
J9yBgAABAA0AAAAAAAACAAEAAAIAAQAAVTgAFAFjDHJvb3Qtc2VydmVycwNuZXQAAAACAAEAAFU4AAQBbcAeAAACAAEAAFU4AAQBZsAeAAACAAEAAFU4AAQBa8AeAAACAAEAAFU4AAQBYcAeAAACAAEAAFU4AAQBacAeAAACAAEAAFU4AAQBYsAeAAACAAEAAFU4AAQBZ8AeAAACAAEAAFU4AAQBZMAeAAACAAEAAFU4AAQBZcAeAAACAAEAAFU4AAQBaMAeAAACAAEAAFU4AAQBbMAeAAACAAEAAFU4AAQBasAe
msg_size: 228
time_elapsed: 1
history: []
- qname: .
qclass: IN
qtype: DNSKEY
options:
flags: 256
edns_version: 0
edns_max_udp_payload: 512
edns_flags: 32768
edns_options: []
tcp: false
responses:
192.168.65.1:
172.17.0.2:
message: >-
pGiBgAABAAMAAAAAAAAwAAEAADAAAQAANV0BCAEBAwgDAQABqAAgqVVmukLohruATNqE5H71bb167GEmFVUs7JBtIRbQ7yBwKMUVVBRN/q/nx8uPAF3RgjQTOsBxCoEYLOH9FK0ig7yDQ1+d8vYxMlGTGhdt8NpR5U9C5gSGDfs1lYAlD1WcxUPE/9Ucvj3oz9BnGSN/n8R+5ynaBoNfpFLoJemhjrwuy89WNHRlLDPPVqkDO8312XMSF5fsgIkEG24DobctCnNbmE4DaHMJMyMk8nwtuoXp2xXoOgFDOC6XSwYhwY5iXs7JB1d9nnut6VJBqB676KkB1NMnbkCxFMCi5vw40ZwuaqsCZEsoE/V1/CFgHg3uSc2e6WpDED5STWKHPQAAMAABAAA1XQEIAQADCAMBAAHVOGipQ4BKV1lqR+WlxyWSNVZvQpuNphgLKpM92pBXLUus7GRwt6TTTLoPfXymuzlvrURyMGRHP/5l/Cbem3MOWz4ERxurtnk/L1KW1wz1bNSAhkehYZcBVhDssWyCIg33exKungC5OUTXGrshPv2T/lXa4VmQ3hFmUazwS4wcgVkx5fr/3/sb0yd0rXKtrRKp80tggjq+kUyXqaa2IbvuWJemkWZhFQStu1gltWa5NS3pCbyV33MikipNfQuTOSVl+fKcepwdSEtKTLOmgiCtsmkEFTVBIPV8kMwUr/6zMVTTct1QKjo74fjgEqSzJuBhAm0knvI7Y8BtCO/LHzGXAAAwAAEAADVdAQgBAQMIAwEAAaz/tAm8yTn4Mfeh5eyI96WSVexTBAvkMgJzkKTOiW1vkIbzxeF3+/4RgWOq7HrxRixHlFlExOLAJr5emLvN7SWXgnLh4+B5xQlNVz8Og8kvArMtNROxVQuCaSnIDdD5LKyWbRd2n9WGe2R8PzgCmr3EgVLrjyBxWezF0jLHwVN8efS3rCj/EWgvIWgb9tarpVUDK/b58Da+sqqls3eNbuv7pr+eoZG+SrDK6nWeL3c6H5Apxz7LjVc1uTIdsIXxuOLYA4/ilBmSVIzuDWfdRUfhHdY6+cn8HFRm+2hM8AnXGXws9555KrUB5qihylGa8subX2Nn6UwNR1AkUTV74bU=
msg_size: 842
time_elapsed: 1
history: []
- qname: .
qclass: IN
qtype: DNSKEY
options:
flags: 256
edns_version: 0
edns_max_udp_payload: 4096
edns_flags: 32768
edns_options: []
tcp: false
responses:
192.168.65.1:
172.17.0.2:
message: >-
pGiBgAABAAMAAAAAAAAwAAEAADAAAQAANV0BCAEBAwgDAQABrP+0CbzJOfgx96Hl7Ij3pZJV7FMEC+QyAnOQpM6JbW+QhvPF4Xf7/hGBY6rsevFGLEeUWUTE4sAmvl6Yu83tJZeCcuHj4HnFCU1XPw6DyS8Csy01E7FVC4JpKcgN0PksrJZtF3af1YZ7ZHw/OAKavcSBUuuPIHFZ7MXSMsfBU3x59LesKP8RaC8haBv21qulVQMr9vnwNr6yqqWzd41u6/umv56hkb5KsMrqdZ4vdzofkCnHPsuNVzW5Mh2whfG44tgDj+KUGZJUjO4NZ91FR+Ed1jr5yfwcVGb7aEzwCdcZfCz3nnkqtQHmqKHKUZryy5tfY2fpTA1HUCRRNXvhtQAAMAABAAA1XQEIAQADCAMBAAHVOGipQ4BKV1lqR+WlxyWSNVZvQpuNphgLKpM92pBXLUus7GRwt6TTTLoPfXymuzlvrURyMGRHP/5l/Cbem3MOWz4ERxurtnk/L1KW1wz1bNSAhkehYZcBVhDssWyCIg33exKungC5OUTXGrshPv2T/lXa4VmQ3hFmUazwS4wcgVkx5fr/3/sb0yd0rXKtrRKp80tggjq+kUyXqaa2IbvuWJemkWZhFQStu1gltWa5NS3pCbyV33MikipNfQuTOSVl+fKcepwdSEtKTLOmgiCtsmkEFTVBIPV8kMwUr/6zMVTTct1QKjo74fjgEqSzJuBhAm0knvI7Y8BtCO/LHzGXAAAwAAEAADVdAQgBAQMIAwEAAagAIKlVZrpC6Ia7gEzahOR+9W29euxhJhVVLOyQbSEW0O8gcCjFFVQUTf6v58fLjwBd0YI0EzrAcQqBGCzh/RStIoO8g0NfnfL2MTJRkxoXbfDaUeVPQuYEhg37NZWAJQ9VnMVDxP/VHL496M/QZxkjf5/Efucp2gaDX6RS6CXpoY68LsvPVjR0ZSwzz1apAzvN9dlzEheX7ICJBBtuA6G3LQpzW5hOA2hzCTMjJPJ8LbqF6dsV6DoBQzgul0sGIcGOYl7OyQdXfZ57relSQageu+ipAdTTJ25AsRTAoub8ONGcLmqrAmRLKBP1dfwhYB4N7knNnulqQxA+Uk1ihz0=
msg_size: 842
time_elapsed: 1
history: []
net.:
type: recursive
stub: false
analysis_start: '2018-05-13 02:56:03 UTC'
analysis_end: '2018-05-13 02:56:03 UTC'
clients_ipv4:
- 172.17.0.2
clients_ipv6: []
parent: .
explicit_delegation: true
auth_ns_ip_mapping:
ns0.:
- 192.168.65.1
queries:
- qname: net.
qclass: IN
qtype: NS
options:
flags: 256
edns_version: 0
edns_max_udp_payload: 4096
edns_flags: 32768
edns_options: []
tcp: false
responses:
192.168.65.1:
172.17.0.2:
message: >-
XimBgAABAA0AAAAAA25ldAAAAgABwAwAAgABAAA62gARAWEMZ3RsZC1zZXJ2ZXJzwAzADAACAAEAADraAAQBasAjwAwAAgABAAA62gAEAWvAI8AMAAIAAQAAOtoABAFtwCPADAACAAEAADraAAQBYsAjwAwAAgABAAA62gAEAWPAI8AMAAIAAQAAOtoABAFmwCPADAACAAEAADraAAQBZcAjwAwAAgABAAA62gAEAWnAI8AMAAIAAQAAOtoABAFkwCPADAACAAEAADraAAQBbMAjwAwAAgABAAA62gAEAWfAI8AMAAIAAQAAOtoABAFowCM=
msg_size: 242
time_elapsed: 1
history: []
- qname: net.
qclass: IN
qtype: DS
options:
flags: 256
edns_version: 0
edns_max_udp_payload: 4096
edns_flags: 32768
edns_options: []
tcp: false
responses:
192.168.65.1:
172.17.0.2:
message: >-
d3eBgAABAAEAAAAAA25ldAAAKwABwAwAKwABAADS8QAkjC4IAnhisn9fUW6+GWgERNTOXnYpgZMYQsRl8AI2QB2L2XPu
msg_size: 69
time_elapsed: 1
history: []
- qname: net.
qclass: IN
qtype: DNSKEY
options:
flags: 256
edns_version: 0
edns_max_udp_payload: 4096
edns_flags: 32768
edns_options: []
tcp: false
responses:
192.168.65.1:
172.17.0.2:
message: >-
ZLyBgAABAAIAAAAAA25ldAAAMAABwAwAMAABAADS8QEGAQEDCAEDmAZ86llyBI+ppYF4OC1gtBz9p/HivF6oB5Zif+vYriB7GtU6xwxuk3Dq2v6cnhisB9JymYU5l0rlBRofQQjYsFQ7ssm+jgRstjbXvSq/DcFoOKvs4UqDn+A/oAvXRM4NMhXcE9AyhL5jT7dCuI20A0uGIpm/rjyrhesonllqz2GAq7/AvSgvCTM3trk+jblgX7Jdnw1KFeJeuajMMNOg+5/qe0W3cFpAGdHwOt5l1VwWRepKQ7bGA+0lAfY4YXy0BP6/OW74NxWkrOVotgWqjdOksxliNQx7u2sbYSDnh8INNM8ZYU6aGAh9JXaEM/kuuc1mIv30SYsi0gW+iWTr58AMADAAAQAA0vEAhgEAAwgBA8qGy0UA+ljcDHd5JNUtTQTJDJreexylp+KEuXjiDDtwuYWgMAB4oZnZUV4lDlUW5kCmipNstugm4LH8eL/ey+7+4ii/ap0UoHTfq1OBuRDEuqjudcf34fsPdVb/NGxXL48LSLZwUPBlC3Y9L8OaITKgjROoDiXq0Ndkmeh5fR+Z
msg_size: 441
time_elapsed: 2
history: []
- qname: net.
qclass: IN
qtype: DNSKEY
options:
flags: 256
edns_version: 0
edns_max_udp_payload: 512
edns_flags: 32768
edns_options: []
tcp: false
responses:
192.168.65.1:
172.17.0.2:
message: >-
ZLyBgAABAAIAAAAAA25ldAAAMAABwAwAMAABAADS8QCGAQADCAEDyobLRQD6WNwMd3kk1S1NBMkMmt57HKWn4oS5eOIMO3C5haAwAHihmdlRXiUOVRbmQKaKk2y26Cbgsfx4v97L7v7iKL9qnRSgdN+rU4G5EMS6qO51x/fh+w91Vv80bFcvjwtItnBQ8GULdj0vw5ohMqCNE6gOJerQ12SZ6Hl9H5nADAAwAAEAANLxAQYBAQMIAQOYBnzqWXIEj6mlgXg4LWC0HP2n8eK8XqgHlmJ/69iuIHsa1TrHDG6TcOra/pyeGKwH0nKZhTmXSuUFGh9BCNiwVDuyyb6OBGy2Nte9Kr8NwWg4q+zhSoOf4D+gC9dEzg0yFdwT0DKEvmNPt0K4jbQDS4Yimb+uPKuF6yieWWrPYYCrv8C9KC8JMze2uT6NuWBfsl2fDUoV4l65qMww06D7n+p7RbdwWkAZ0fA63mXVXBZF6kpDtsYD7SUB9jhhfLQE/r85bvg3FaSs5Wi2BaqN06SzGWI1DHu7axthIOeHwg00zxlhTpoYCH0ldoQz+S65zWYi/fRJiyLSBb6JZOvn
msg_size: 441
time_elapsed: 2
history: []
akamaiedge.net.:
type: recursive
stub: false
analysis_start: '2018-05-13 02:56:03 UTC'
analysis_end: '2018-05-13 02:56:03 UTC'
clients_ipv4:
- 172.17.0.2
clients_ipv6: []
parent: net.
explicit_delegation: true
queries:
- qname: akamaiedge.net.
qclass: IN
qtype: A
options:
flags: 256
edns_version: 0
edns_max_udp_payload: 4096
edns_flags: 32768
edns_options: []
tcp: false
responses:
192.168.65.1:
172.17.0.2:
message: NdeBgwABAAAAAAAACmFrYW1haWVkZ2UDbmV0AAABAAE=
msg_size: 32
time_elapsed: 46
history: []
e9428.b.akamaiedge.net.:
type: recursive
stub: false
analysis_start: '2018-05-13 02:56:03 UTC'
analysis_end: '2018-05-13 02:56:03 UTC'
clients_ipv4:
- 172.17.0.2
clients_ipv6: []
parent: net.
nxdomain_ancestor: akamaiedge.net.
explicit_delegation: true
queries:
- qname: e9428.b.akamaiedge.net.
qclass: IN
qtype: A
options:
flags: 256
edns_version: 0
edns_max_udp_payload: 4096
edns_flags: 32768
edns_options: []
tcp: false
responses:
192.168.65.1:
172.17.0.2:
message: >-
Kg6BgAABAAEAAAAABWU5NDI4AWIKYWthbWFpZWRnZQNuZXQAAAEAAcAMAAEAAQAAAAsABBfRsWw=
msg_size: 56
time_elapsed: 1
history: []
- qname: e9428.b.akamaiedge.net.
qclass: IN
qtype: AAAA
options:
flags: 256
edns_version: 0
edns_max_udp_payload: 4096
edns_flags: 32768
edns_options: []
tcp: false
responses:
192.168.65.1:
172.17.0.2:
message: Dn6BgwABAAAAAAAABWU5NDI4AWIKYWthbWFpZWRnZQNuZXQAABwAAQ==
msg_size: 40
time_elapsed: 3
history: []
edgekey.net.:
type: recursive
stub: false
analysis_start: '2018-05-13 02:56:03 UTC'
analysis_end: '2018-05-13 02:56:03 UTC'
clients_ipv4:
- 172.17.0.2
clients_ipv6: []
parent: net.
explicit_delegation: true
queries:
- qname: edgekey.net.
qclass: IN
qtype: A
options:
flags: 256
edns_version: 0
edns_max_udp_payload: 4096
edns_flags: 32768
edns_options: []
tcp: false
responses:
192.168.65.1:
172.17.0.2:
message: A+iBgwABAAAAAAAAB2VkZ2VrZXkDbmV0AAABAAE=
msg_size: 29
time_elapsed: 87
history: []
slot9428.ebay.com.edgekey.net.:
type: recursive
stub: false
analysis_start: '2018-05-13 02:55:58 UTC'
analysis_end: '2018-05-13 02:56:00 UTC'
clients_ipv4:
- 172.17.0.2
clients_ipv6: []
parent: net.
nxdomain_ancestor: edgekey.net.
explicit_delegation: true
queries:
- qname: slot9428.ebay.com.edgekey.net.
qclass: IN
qtype: A
options:
flags: 256
edns_version: 0
edns_max_udp_payload: 4096
edns_flags: 32768
edns_options: []
tcp: false
responses:
192.168.65.1:
172.17.0.2:
message: >-
dCeBgAABAAIAAAAACHNsb3Q5NDI4BGViYXkDY29tB2VkZ2VrZXkDbmV0AAABAAHADAAFAAEAABf/ABUFZTk0MjgBYgpha2FtYWllZGdlwCbAOwABAAEAAAALAAQX0bFs
msg_size: 96
time_elapsed: 2
history: []
- qname: slot9428.ebay.com.edgekey.net.
qclass: IN
qtype: AAAA
options:
flags: 256
edns_version: 0
edns_max_udp_payload: 4096
edns_flags: 32768
edns_options: []
tcp: false
responses:
192.168.65.1:
172.17.0.2:
message: xmCBgwABAAAAAAAACHNsb3Q5NDI4BGViYXkDY29tB2VkZ2VrZXkDbmV0AAAcAAE=
msg_size: 47
time_elapsed: 4
history: []
com.:
type: recursive
stub: false
analysis_start: '2018-05-13 02:55:58 UTC'
analysis_end: '2018-05-13 02:55:58 UTC'
clients_ipv4:
- 172.17.0.2
clients_ipv6: []
parent: .
explicit_delegation: true
auth_ns_ip_mapping:
ns0.:
- 192.168.65.1
queries:
- qname: com.
qclass: IN
qtype: NS
options:
flags: 256
edns_version: 0
edns_max_udp_payload: 4096
edns_flags: 32768
edns_options: []
tcp: false
responses:
192.168.65.1:
172.17.0.2:
message: >-
5mWBgAABAA0AAAAAA2NvbQAAAgABwAwAAgABAAA7ZAAUAWYMZ3RsZC1zZXJ2ZXJzA25ldADADAACAAEAADtkAAQBa8AjwAwAAgABAAA7ZAAEAWfAI8AMAAIAAQAAO2QABAFiwCPADAACAAEAADtkAAQBY8AjwAwAAgABAAA7ZAAEAWnAI8AMAAIAAQAAO2QABAFkwCPADAACAAEAADtkAAQBYcAjwAwAAgABAAA7ZAAEAWXAI8AMAAIAAQAAO2QABAFqwCPADAACAAEAADtkAAQBbMAjwAwAAgABAAA7ZAAEAW3AI8AMAAIAAQAAO2QABAFowCM=
msg_size: 245
time_elapsed: 1
history: []
- qname: com.
qclass: IN
qtype: DS
options:
flags: 256
edns_version: 0
edns_max_udp_payload: 4096
edns_flags: 32768
edns_options: []
tcp: false
responses:
192.168.65.1:
172.17.0.2:
message: >-
cC6BgAABAAEAAAAAA2NvbQAAKwABwAwAKwABAABmJQAkeL0IAuLTyRb23urHMpToJo+1iFBEqDP8VFlYj0qRhM/EGldm
msg_size: 69
time_elapsed: 1
history: []
- qname: com.
qclass: IN
qtype: DNSKEY
options:
flags: 256
edns_version: 0
edns_max_udp_payload: 4096
edns_flags: 32768
edns_options: []
tcp: false
responses:
192.168.65.1:
172.17.0.2:
message: >-
ok6BgAABAAIAAAAAA2NvbQAAMAABwAwAMAABAADS8QCGAQADCAEDs/ogasWbQigQairDv4vSVXIWdmdaRQlp4/qRRyYt7j1omFrADL7TKSJ0T31dTrfZvvJxz8UpLzi9vm7bY+IxCurlH02qxE+MADEodpPa19ofvZFUA43LkcHxV1eWwJhXd4+e4YnOxRGFUsV7TVFVOwT2hUR7ha9zq0B0F2keCHnADAAwAAEAANLxAQYBAQMIAQPDzldNmMvZFX4NcNJ0uEnKDg7tmv/F3MyQR0lpBmVcNcsIszxNFxsBfKNW9JYCYqpik8366LE7VbIcNRzfp2h9OO8HRl+H+E08zauK8k7evWEmu/6od+2boggPoiEfGNyvNPaSI7FOIroDsnw/taggzHRX1Z7SOiOiPWPNIwSUyWOZ79VmcQ1GLkC6NlYvG3HwYmynQv6oFwGv/KELSw7ZSdrbTQ0HXvZbqMUI7BaMskmvgm1G7oKZ1YiF7O9ioVNc0+7ASbqmZN7Z98EGU/Qh2K/BgUe8Hs0XVcdPKrtyYnoQHd2ynKPcMMlTEih2/2HDHjRPJ2aywIpKNnv4oPo/
msg_size: 441
time_elapsed: 1
history: []
- qname: com.
qclass: IN
qtype: DNSKEY
options:
flags: 256
edns_version: 0
edns_max_udp_payload: 512
edns_flags: 32768
edns_options: []
tcp: false
responses:
192.168.65.1:
172.17.0.2:
message: >-
ok6BgAABAAIAAAAAA2NvbQAAMAABwAwAMAABAADS8QEGAQEDCAEDw85XTZjL2RV+DXDSdLhJyg4O7Zr/xdzMkEdJaQZlXDXLCLM8TRcbAXyjVvSWAmKqYpPN+uixO1WyHDUc36dofTjvB0Zfh/hNPM2rivJO3r1hJrv+qHftm6IID6IhHxjcrzT2kiOxTiK6A7J8P7WoIMx0V9We0jojoj1jzSMElMljme/VZnENRi5AujZWLxtx8GJsp0L+qBcBr/yhC0sO2Una200NB172W6jFCOwWjLJJr4JtRu6CmdWIhezvYqFTXNPuwEm6pmTe2ffBBlP0IdivwYFHvB7NF1XHTyq7cmJ6EB3dspyj3DDJUxIodv9hwx40TydmssCKSjZ7+KD6P8AMADAAAQAA0vEAhgEAAwgBA7P6IGrFm0IoEGoqw7+L0lVyFnZnWkUJaeP6kUcmLe49aJhawAy+0ykidE99XU632b7ycc/FKS84vb5u22PiMQrq5R9NqsRPjAAxKHaT2tfaH72RVAONy5HB8VdXlsCYV3ePnuGJzsURhVLFe01RVTsE9oVEe4Wvc6tAdBdpHgh5
msg_size: 441
time_elapsed: 1
history: []
ebay.com.:
type: recursive
stub: false
analysis_start: '2018-05-13 02:55:58 UTC'
analysis_end: '2018-05-13 02:55:58 UTC'
clients_ipv4:
- 172.17.0.2
clients_ipv6: []
parent: com.
explicit_delegation: true
auth_ns_ip_mapping:
ns0.:
- 192.168.65.1
queries:
- qname: ebay.com.
qclass: IN
qtype: NS
options:
flags: 256
edns_version: 0
edns_max_udp_payload: 4096
edns_flags: 32768
edns_options: []
tcp: false
responses:
192.168.65.1:
172.17.0.2:
message: >-
bAKBgAABAAcAAAAABGViYXkDY29tAAACAAHADAACAAEAAAktABQDbnMyA3A0NwZkeW5lY3QDbmV0AMAMAAIAAQAACS0AEQJhMwt2ZXJpc2lnbmRuc8ARwAwAAgABAAAJLQAGA25zNMAqwAwAAgABAAAJLQAGA25zM8AqwAwAAgABAAAJLQAGA25zMcAqwAwAAgABAAAJLQAFAmExwEnADAACAAEAAAktAAUCYTLASQ==
msg_size: 175
time_elapsed: 1
history: []
- qname: ebay.com.
qclass: IN
qtype: DS
options:
flags: 256
edns_version: 0
edns_max_udp_payload: 4096
edns_flags: 32768
edns_options: []
tcp: false
responses:
192.168.65.1:
172.17.0.2:
message: Xs+BgwABAAAAAAAABGViYXkDY29tAAArAAE=
msg_size: 26
time_elapsed: 1
history: []
- qname: ebay.com.
qclass: IN
qtype: DNSKEY
options:
flags: 256
edns_version: 0
edns_max_udp_payload: 512
edns_flags: 32768
edns_options: []
tcp: false
responses:
192.168.65.1:
172.17.0.2:
message: Q0WBgwABAAAAAAAABGViYXkDY29tAAAwAAE=
msg_size: 26
time_elapsed: 1
history: []
- qname: ebay.com.
qclass: IN
qtype: DNSKEY
options:
flags: 256
edns_version: 0
edns_max_udp_payload: 4096
edns_flags: 32768
edns_options: []
tcp: false
responses:
192.168.65.1:
172.17.0.2:
message: Q0WBgwABAAAAAAAABGViYXkDY29tAAAwAAE=
msg_size: 26
time_elapsed: 1
history: []
www.ebay.com.:
type: recursive
stub: false
analysis_start: '2018-05-13 02:55:56 UTC'
analysis_end: '2018-05-13 02:55:58 UTC'
clients_ipv4:
- 172.17.0.2
clients_ipv6: []
parent: ebay.com.
nxdomain_ancestor: ebay.com.
explicit_delegation: true
queries:
- qname: www.ebay.com.
qclass: IN
qtype: A
options:
flags: 256
edns_version: 0
edns_max_udp_payload: 4096
edns_flags: 32768
edns_options: []
tcp: false
responses:
192.168.65.1:
172.17.0.2:
message: >-
xRKBgAABAAMAAAAAA3d3dwRlYmF5A2NvbQAAAQABwAwABQABAAAATwAfCHNsb3Q5NDI4BGViYXkDY29tB2VkZ2VrZXkDbmV0AMAqAAUAAQAAF/8AFQVlOTQyOAFiCmFrYW1haWVkZ2XARMBVAAEAAQAAAAsABBfRsWw=
msg_size: 122
time_elapsed: 2
history: []
- qname: www.ebay.com.
qclass: IN
qtype: AAAA
options:
flags: 256
edns_version: 0
edns_max_udp_payload: 4096
edns_flags: 32768
edns_options: []
tcp: false
responses:
192.168.65.1:
172.17.0.2:
message: +SCBgwABAAAAAAAAA3d3dwRlYmF5A2NvbQAAHAAB
msg_size: 30
time_elapsed: 6
history: []
_meta._dnsviz.:
version: 1.1
names:
- www.ebay.com.
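Since the question also asks for a visual representation, the same probe output can be piped into the other dnsviz subcommands. A rough sketch, assuming the image ships dnsviz print and dnsviz graph along with their graphviz dependency (flags may differ slightly between dnsviz versions):
docker run -it nrshrivatsan/dnsviz /bin/bash -c "dnsviz probe www.ebay.com | dnsviz print"
docker run -v "$PWD":/out nrshrivatsan/dnsviz /bin/bash -c "dnsviz probe www.ebay.com | dnsviz graph -Tpng -o /out/www.ebay.com.png"
The first command prints the gathered responses (including the CNAME chain) as zone-file-style text; the second writes a PNG of the DNSSEC authentication chain into the current directory on the host via the mounted volume.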
Source
https://github.com/dnsviz/dnsviz/pull/36

Related

grep a command's output, but how to look for a specific block?

I'm looking for a specific block with grep.
For example, I have this output from an Android device:
Stream volumes (device: index)
- STREAM_VOICE_CALL:
Muted: false
Min: 1
Max: 5
Current: 40000000 (default): 4
Devices: earpiece
- STREAM_SYSTEM:
Muted: false
Min: 0
Max: 7
Current: 40000000 (default): 5
Devices: speaker
- STREAM_RING:
Muted: false
Min: 0
Max: 7
Current: 40000000 (default): 5
Devices: speaker
**- STREAM_MUSIC:
Muted: false
Min: 0
Max: 15
Current: 2 (speaker): 12, 4000000 (usb_headset): 3, 40000000 (default): 8
Devices: speaker**
- STREAM_ALARM:
Muted: false
Min: 0
Max: 7
Current: 40000000 (default): 6
Devices: speaker
- STREAM_NOTIFICATION:
Muted: false
Min: 0
Max: 7
Current: 40000000 (default): 5
Devices: speaker
- STREAM_BLUETOOTH_SCO:
Muted: false
Min: 0
Max: 15
Current: 40000000 (default): 7
Devices: earpiece
- STREAM_SYSTEM_ENFORCED:
Muted: false
Min: 0
Max: 7
Current: 40000000 (default): 5
Devices: speaker
- STREAM_DTMF:
Muted: false
Min: 0
Max: 15
Current: 40000000 (default): 11
Devices: speaker
- STREAM_TTS:
Muted: false
Min: 0
Max: 15
Current: 2 (speaker): 12, 4000000 (usb_headset): 3, 40000000 (default): 8
Devices: speaker
- STREAM_ACCESSIBILITY:
Muted: false
Min: 0
Max: 15
Current: 2 (speaker): 12, 4000000 (usb_headset): 3, 40000000 (default): 8
Devices: speaker
I need to get the block within ** ** with grep. Which grep command do I need to find that specific block of output?
I've tried
adb shell dumpsys audio | grep {STREAM_MUSIC:,STREAM_ALARM}
which returns nothing, and
adb shell dumpsys audio | grep -w STREAM_MUSIC
which returns only the first line.
If you can use awk, you can do this:
awk '/- STREAM/ {f=0} /- STREAM_MUSIC:/ {f=1} f'
- STREAM_MUSIC:
Muted: false
Min: 0
Max: 15
Current: 2 (speaker): 12, 4000000 (usb_headset): 3, 40000000 (default): 8
Devices: speaker
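The awk one-liner works by toggling a flag: every line matching - STREAM clears f, the - STREAM_MUSIC: header immediately sets it again, and the trailing f prints lines only while the flag is set, so printing stops at the next stream header. Plugged into the original command (dumpsys output formatting can vary between Android versions, so treat this as a sketch), the full pipeline would be:
adb shell dumpsys audio | awk '/- STREAM/ {f=0} /- STREAM_MUSIC:/ {f=1} f'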

Clustering Dockerized Elasticsearch with multiple Docker Hosts

I'm trying to set up clustering with Docker Compose.
I have two Elasticsearch containers deployed on different Docker hosts.
docker version: 18.06.3-ce
elasticsearch : 6.5.2
docker-compose.yml for docker-container-1
services:
  elasticsearch:
    restart: always
    hostname: elasticsearch
    image: docker-elk/elasticsearch:1.0.0
    build:
      context: elasticsearch
      dockerfile: Dockerfile
    environment:
      discovery.type: zen
    ports:
      - 9200:9200
      - 9300:9300
    env_file:
      - ./elasticsearch/elasticsearch.env
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
docker-compose.yml for docker-container-2
services:
  elasticsearch:
    restart: always
    hostname: elasticsearch
    image: docker-elk/elasticsearch:1.0.0
    build:
      context: elasticsearch
      dockerfile: Dockerfile
    environment:
      discovery.type: zen
    ports:
      - 9200:9200
      - 9300:9300
    env_file:
      - ./elasticsearch/elasticsearch.env
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
elasticsearch.yml on the elasticsearch-docker-container-1 on the Docker-Host 1
xpack.security.enabled: true
cluster.name: es-cluster
node.name: es1
network.host: 0.0.0.0
node.master: true
node.data: true
transport.tcp.port: 9300
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/logs
discovery.zen.minimum_master_nodes: 2
gateway.recover_after_nodes: 1
discovery.zen.ping.unicast.hosts: ["host1:9300", "host2:9300","host1:9200", "host2:9200"]
network.publish_host: host1
elasticsearch.yml on the elasticsearch-docker-container-2 on the Docker-Host 2
xpack.security.enabled: true
cluster.name: es-cluster
node.name: es2
network.host: 0.0.0.0
node.master: true
node.data: true
transport.tcp.port: 9300
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/logs
discovery.zen.minimum_master_nodes: 2
gateway.recover_after_nodes: 1
discovery.zen.ping.unicast.hosts: ["host1:9300", "host2:9300","host1:9200", "host2:9200"]
network.publish_host: host2
Below is the result of GET /_cluster/health?pretty and it shows that there is only one node.
{
"cluster_name" : "dps_geocluster",
"status" : "yellow",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 33,
"active_shards" : 33,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 30,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 52.38095238095239
}
According to the documentation below, at least three Elasticsearch nodes are required.
https://www.elastic.co/guide/en/elasticsearch/reference/6.5/modules-node.html
Should each Elasticsearch container be on a different Docker host?
The following error turned out to be the cause. After increasing vm.max_map_count to 262144 with sysctl, it works fine.
elasticsearch_1 | [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
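For reference, the limit can be raised on each Docker host like this (standard Linux sysctl usage; the first command applies it immediately, the second makes it persistent across reboots):
sudo sysctl -w vm.max_map_count=262144
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf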
Now number_of_nodes is 2.
{
"cluster_name" : "es-cluster",
"status" : "yellow",
"timed_out" : false,
"number_of_nodes" : 2,
"number_of_data_nodes" : 2,
"active_primary_shards" : 35,
"active_shards" : 37,
"relocating_shards" : 0,
"initializing_shards" : 2,
"unassigned_shards" : 31,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 52.85714285714286
}

Firebase 3.6.0 - Login Authentication - Swift 3

I've recently moved from Parse to Firebase due to it shutting down. However, I am now encountering many issues. I'm just testing logging in using this code in my AppDelegate class. Whenever I run this, I get Thread 1: signal SIGABRT on the class. How exactly do I fix this?
func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
// Override point for customization after application launch.
FIRApp.configure()
FIRAuth.auth()?.signIn(withEmail: "test@test.com", password: "123456", completion: { (user, error) in
if user != nil {
print(error?.localizedDescription)
} else {
print(user?.email)
}
})
return true
}
Pod file
# Uncomment this line to define a global platform for your project
# platform :ios, '10.0'
target 'Fire' do
# Comment this line if you're not using Swift and don't want to use dynamic frameworks
use_frameworks!
# Pods for Fire
pod 'Firebase'
pod 'Firebase/Auth'
pod 'Firebase/Core'
pod 'Firebase/AdMob'
pod 'Firebase/Messaging'
pod 'Firebase/Database'
pod 'Firebase/Invites'
pod 'Firebase/DynamicLinks'
pod 'Firebase/Crash'
pod 'Firebase/RemoteConfig'
pod 'Firebase/AppIndexing'
pod 'Firebase/Storage'
target 'FireTests' do
inherit! :search_paths
# Pods for testing
end
target 'FireUITests' do
inherit! :search_paths
# Pods for testing
end
end
Current error in the console:
objc[7008]: Class PLBuildVersion is implemented in both /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator.sdk/System/Library/PrivateFrameworks/AssetsLibraryServices.framework/AssetsLibraryServices (0x118b4f910) and /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator.sdk/System/Library/PrivateFrameworks/PhotoLibraryServices.framework/PhotoLibraryServices (0x1188e2210). One of the two will be used. Which one is undefined.
2016-09-30 14:05:44.946754 Fire[7008:562611] bundleid: Natural-Development.Fire, enable_level: 0, persist_level: 0, propagate_with_activity: 0
2016-09-30 14:05:44.947828 Fire[7008:562611] subsystem: com.apple.siri, category: Intents, enable_level: 1, persist_level: 1, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 0, privacy_setting: 0, enable_private_data: 0
2016-09-30 14:05:45.063385 Fire[7008:562826] subsystem: com.apple.UIKit, category: HIDEventFiltered, enable_level: 0, persist_level: 0, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 1, privacy_setting: 2, enable_private_data: 0
2016-09-30 14:05:45.063945 Fire[7008:562826] subsystem: com.apple.UIKit, category: HIDEventIncoming, enable_level: 0, persist_level: 0, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 1, privacy_setting: 2, enable_private_data: 0
2016-09-30 14:05:45.074818 Fire[7008:562825] subsystem: com.apple.BaseBoard, category: MachPort, enable_level: 1, persist_level: 0, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 0, privacy_setting: 0, enable_private_data: 0
2016-09-30 14:05:45.089816 Fire[7008:562611] subsystem: com.apple.UIKit, category: StatusBar, enable_level: 0, persist_level: 0, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 1, privacy_setting: 2, enable_private_data: 0
2016-09-30 14:05:45.112479 Fire[7008:562611] subsystem: com.apple.SystemConfiguration, category: SCNetworkReachability, enable_level: 0, persist_level: 0, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 0, privacy_setting: 2, enable_private_data: 0
2016-09-30 14:05:45.114628 Fire[7008:562611] subsystem: com.apple.network, category: , enable_level: 0, persist_level: 0, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 0, privacy_setting: 2, enable_private_data: 0
2016-09-30 14:05:45.114948 Fire[7008:562611] [] nw_resolver_create_dns_service_on_queue Starting host resolution app-measurement.com:0, flags 0x4000d000
2016-09-30 14:05:45.115464 Fire[7008:562825] [] nw_resolver_host_resolve_callback flags=0x3 ifindex=0 error=NoSuchRecord(-65554) hostname=app-measurement.com. addr=::.0 ttl=60
2016-09-30 14:05:45.115 Fire[7008:562611] Configuring the default app.
2016-09-30 14:05:45.115705 Fire[7008:562825] [] nw_resolver_host_resolve_callback flags=0x3 ifindex=0 error=NoError(0) hostname=app-measurement.com. addr=144.131.80.232:0 ttl=375
2016-09-30 14:05:45.116125 Fire[7008:562825] [] nw_resolver_host_resolve_callback flags=0x3 ifindex=0 error=NoError(0) hostname=app-measurement.com. addr=144.131.80.221:0 ttl=375
2016-09-30 14:05:45.116941 Fire[7008:562825] [] nw_host_stats_add_src recv too small, received 24, expected 28
2016-09-30 14:05:45.117411 Fire[7008:562825] [] nw_host_stats_add_src recv too small, received 24, expected 28
2016-09-30 14:05:45.117625 Fire[7008:562825] [] sa_dst_compare_internal 144.131.80.221:0#0 = 144.131.80.232:0#0
2016-09-30 14:05:45.118892 Fire[7008:562825] [] nw_resolver_host_resolve_callback flags=0x3 ifindex=0 error=NoError(0) hostname=app-measurement.com. addr=144.131.80.216:0 ttl=375
2016-09-30 14:05:45.119183 Fire[7008:562825] [] nw_host_stats_add_src recv too small, received 24, expected 28
2016-09-30 14:05:45.119420 Fire[7008:562825] [] sa_dst_compare_internal 144.131.80.216:0#0 = 144.131.80.232:0#0
2016-09-30 14:05:45.119694 Fire[7008:562825] [] sa_dst_compare_internal 144.131.80.216:0#0 = 144.131.80.221:0#0
2016-09-30 14:05:45.119926 Fire[7008:562825] [] nw_resolver_host_resolve_callback flags=0x3 ifindex=0 error=NoError(0) hostname=app-measurement.com. addr=144.131.80.236:0 ttl=375
2016-09-30 14:05:45.120200 Fire[7008:562825] [] nw_host_stats_add_src recv too small, received 24, expected 28
2016-09-30 14:05:45.120391 Fire[7008:562825] [] sa_dst_compare_internal 144.131.80.236:0#0 = 144.131.80.232:0#0
2016-09-30 14:05:45.120679 Fire[7008:562825] [] sa_dst_compare_internal 144.131.80.236:0#0 = 144.131.80.221:0#0
2016-09-30 14:05:45.137009 Fire[7008:562825] [] sa_dst_compare_internal 144.131.80.236:0#0 = 144.131.80.216:0#0
2016-09-30 14:05:45.138700 Fire[7008:562825] [] nw_resolver_host_resolve_callback flags=0x3 ifindex=0 error=NoError(0) hostname=app-measurement.com. addr=144.131.80.251:0 ttl=375
2016-09-30 14:05:45.141072 Fire[7008:562825] [] nw_host_stats_add_src recv too small, received 24, expected 28
2016-09-30 14:05:45.150509 Fire[7008:562825] [] sa_dst_compare_internal 144.131.80.251:0#0 = 144.131.80.232:0#0
2016-09-30 14:05:45.151327 Fire[7008:562825] [] sa_dst_compare_internal 144.131.80.251:0#0 = 144.131.80.221:0#0
2016-09-30 14:05:45.152173 Fire[7008:562825] [] sa_dst_compare_internal 144.131.80.251:0#0 = 144.131.80.216:0#0
2016-09-30 14:05:45.152589 Fire[7008:562825] [] sa_dst_compare_internal 144.131.80.251:0#0 = 144.131.80.236:0#0
2016-09-30 14:05:45.152901 Fire[7008:562825] [] nw_resolver_host_resolve_callback flags=0x3 ifindex=0 error=NoError(0) hostname=app-measurement.com. addr=144.131.80.227:0 ttl=375
2016-09-30 14:05:45.153323 Fire[7008:562825] [] nw_host_stats_add_src recv too small, received 24, expected 28
2016-09-30 14:05:45.153781 Fire[7008:562825] [] sa_dst_compare_internal 144.131.80.227:0#0 = 144.131.80.232:0#0
2016-09-30 14:05:45.154232 Fire[7008:562825] [] sa_dst_compare_internal 144.131.80.227:0#0 = 144.131.80.221:0#0
2016-09-30 14:05:45.154963 Fire[7008:562825] [] sa_dst_compare_internal 144.131.80.227:0#0 = 144.131.80.216:0#0
2016-09-30 14:05:45.155176 Fire[7008:562825] [] sa_dst_compare_internal 144.131.80.227:0#0 = 144.131.80.236:0#0
2016-09-30 14:05:45.155547 Fire[7008:562825] [] sa_dst_compare_internal 144.131.80.227:0#0 = 144.131.80.251:0#0
2016-09-30 14:05:45.155937 Fire[7008:562825] [] nw_resolver_host_resolve_callback flags=0x3 ifindex=0 error=NoError(0) hostname=app-measurement.com. addr=144.131.80.247:0 ttl=375
2016-09-30 14:05:45.156468 Fire[7008:562825] [] nw_host_stats_add_src recv too small, received 24, expected 28
2016-09-30 14:05:45.156915 Fire[7008:562825] [] sa_dst_compare_internal 144.131.80.247:0#0 = 144.131.80.232:0#0
2016-09-30 14:05:45.157129 Fire[7008:562825] [] sa_dst_compare_internal 144.131.80.247:0#0 = 144.131.80.221:0#0
2016-09-30 14:05:45.157679 Fire[7008:562825] [] sa_dst_compare_internal 144.131.80.247:0#0 = 144.131.80.216:0#0
2016-09-30 14:05:45.157 Fire[7008:562611] Firebase Crash Reporting: Successfully enabled
2016-09-30 14:05:45.158122 Fire[7008:562825] [] sa_dst_compare_internal 144.131.80.247:0#0 = 144.131.80.236:0#0
2016-09-30 14:05:45.158717 Fire[7008:562825] [] sa_dst_compare_internal 144.131.80.247:0#0 = 144.131.80.251:0#0
2016-09-30 14:05:45.159778 Fire[7008:562825] [] sa_dst_compare_internal 144.131.80.247:0#0 = 144.131.80.227:0#0
2016-09-30 14:05:45.160491 Fire[7008:562825] [] nw_resolver_host_resolve_callback flags=0x3 ifindex=0 error=NoError(0) hostname=app-measurement.com. addr=144.131.80.217:0 ttl=375
2016-09-30 14:05:45.161141 Fire[7008:562846] subsystem: com.apple.libsqlite3, category: logging, enable_level: 0, persist_level: 0, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 1, privacy_setting: 2, enable_private_data: 0
2016-09-30 14:05:45.161647 Fire[7008:562847] [] tcp_connection_create_with_endpoint_and_parameters 1 play.googleapis.com 443
2016-09-30 14:05:45.162382 Fire[7008:562825] [] nw_host_stats_add_src recv too small, received 24, expected 28
2016-09-30 14:05:45.163649 Fire[7008:562847] [] tcp_connection_start 1 starting
2016-09-30 14:05:45.164405 Fire[7008:562825] [] sa_dst_compare_internal 144.131.80.217:0#0 = 144.131.80.232:0#0
2016-09-30 14:05:45.165486 Fire[7008:562611] subsystem: com.apple.securityd, category: OSStatus, enable_level: 0, persist_level: 0, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 0, privacy_setting: 2, enable_private_data: 0
2016-09-30 14:05:45.165396 Fire[7008:562847] [] nw_connection_create creating connection to play.googleapis.com:443
2016-09-30 14:05:45.166691 Fire[7008:562825] [] sa_dst_compare_internal 144.131.80.217:0#0 = 144.131.80.221:0#0
2016-09-30 14:05:45.166: <FIRInstanceID/WARNING> Failed to remove checkin auth credentials from Keychain Error Domain=com.google.iid Code=-34018 "(null)"
2016-09-30 14:05:45.167485 Fire[7008:562848] [] tcp_connection_create_with_endpoint_and_parameters 2 plus.google.com 443
2016-09-30 14:05:45.168079 Fire[7008:562847] [] tcp_connection_start starting tc_nwconn=0x7f95227046a0
2016-09-30 14:05:45.168586 Fire[7008:562825] [] sa_dst_compare_internal 144.131.80.217:0#0 = 144.131.80.216:0#0
2016-09-30 14:05:45.168951 Fire[7008:562848] [] tcp_connection_start 2 starting
2016-09-30 14:05:45.169: <FIRInstanceID/WARNING> Error failed to remove all tokens from keychain Error Domain=com.google.iid Code=-34018 "(null)"
2016-09-30 14:05:45.169287 Fire[7008:562847] [] __nw_connection_start_block_invoke 1 starting
2016-09-30 14:05:45.169565 Fire[7008:562825] [] sa_dst_compare_internal 144.131.80.217:0#0 = 144.131.80.236:0#0
2016-09-30 14:05:45.169854 Fire[7008:562848] [] nw_connection_create creating connection to plus.google.com:443
2016-09-30 14:05:45.170508 Fire[7008:562847] [] nw_endpoint_handler_start [1 play.googleapis.com:443 initial path (null)]
2016-09-30 14:05:45.171: <FIRInstanceID/WARNING> FIRInstanceID AppDelegate proxy enabled, will swizzle app delegate remote notification handlers. To disable add "FirebaseAppDelegateProxyEnabled" to your Info.plist and set it to NO
2016-09-30 14:05:45.171: <FIRInstanceID/WARNING> Failed to fetch APNS token Error Domain=com.firebase.iid Code=1001 "(null)"
202016-09-30 14:05:45.174 Fire[7008:562611] A reversed client ID should be added as a URL scheme to enable Google sign-in.
16-09-30 14:05:45.171034 Fire[7008:562825] [] sa_dst_compare_internal 144.131.80.217:0#0 = 144.131.80.251:0#0
2016-09-30 14:05:45.186541 Fire[7008:562848] [] tcp_connection_start starting tc_nwconn=0x7f9522406890
2016-09-30 14:05:45.186901 Fire[7008:562847] [] nw_connection_endpoint_report [1 play.googleapis.com:443 initial path (null)] reported event path:start
2016-09-30 14:05:45.188 Fire[7008:] <FIRAnalytics/INFO> Firebase Analytics v.3402000 started
2016-09-30 14:05:45.189 Fire[7008:] <FIRAnalytics/INFO> To enable debug logging set the following application argument: -FIRAnalyticsDebugEnabled (see google link)
2016-09-30 14:05:45.191242 Fire[7008:562817] [] tcp_connection_create_with_endpoint_and_parameters 3 device-provisioning.googleapis.com 443
2016-09-30 14:05:45.192630 Fire[7008:562825] [] sa_dst_compare_internal 144.131.80.217:0#0 = 144.131.80.227:0#0
2016-09-30 14:05:45.195343 Fire[7008:562847] [] nw_endpoint_handler_path_change [1 play.googleapis.com:443 waiting path (satisfied)]
2016-09-30 14:05:45.196090 Fire[7008:562817] [] tcp_connection_start 3 starting
2016-09-30 14:05:45.196 Fire[7008:562611] *** Terminating app due to uncaught exception 'com.firebase.appinvite', reason: 'App Invite configuration failed.'
*** First throw call stack:
(
0 CoreFoundation 0x0000000110f4b34b __exceptionPreprocess + 171
1 libobjc.A.dylib 0x000000011058f21e objc_exception_throw + 48
2 CoreFoundation 0x0000000110fb4265 +[NSException raise:format:] + 197
3 Fire 0x000000010c31cc93 -[GINInvite(FIRApp) configureAppInvite:] + 978
4 Fire 0x000000010c31c892 +[GINInvite(FIRApp) receivedReadyToConfigureNotification:] + 154
5 CoreFoundation 0x0000000110ee919c __CFNOTIFICATIONCENTER_IS_CALLING_OUT_TO_AN_OBSERVER__ + 12
6 CoreFoundation 0x0000000110ee909b _CFXRegistrationPost + 427
7 CoreFoundation 0x0000000110ee8e02 ___CFXNotificationPost_block_invoke + 50
8 CoreFoundation 0x0000000110eabea2 -[_CFXNotificationRegistrar find:object:observer:enumerator:] + 2018
9 CoreFoundation 0x0000000110eaaf3b _CFXNotificationPost + 667
10 Foundation 0x000000011005713b -[NSNotificationCenter postNotificationName:object:userInfo:] + 66
11 Fire 0x000000010c22414a +[FIRApp sendNotificationsToSDKs:] + 296
12 Fire 0x000000010c222fee +[FIRApp configureDefaultAppWithOptions:sendingNotifications:] + 324
13 Fire 0x000000010c222cfb +[FIRApp configure] + 302
14 Fire 0x000000010c1ddda4 _TFC4Fire11AppDelegate11applicationfTCSo13UIApplication29didFinishLaunchingWithOptionsGSqGVs10DictionaryVSC29UIApplicationLaunchOptionsKeyP____Sb + 100
15 Fire 0x000000010c1de774 _TToFC4Fire11AppDelegate11applicationfTCSo13UIApplication29didFinishLaunchingWithOptionsGSqGVs10DictionaryVSC29UIApplicationLaunchOptionsKeyP____Sb + 180
16 UIKit 0x000000011136568e -[UIApplication _handleDelegateCallbacksWithOptions:isSuspended:restoreState:] + 290
17 UIKit 0x0000000111367013 -[UIApplication _callInitializationDelegatesForMainScene:transitionContext:] + 4236
18 UIKit 0x000000011136d3b9 -[UIApplication _runWithMainScene:transitionContext:completion:] + 1731
19 UIKit 0x000000011136a539 -[UIApplication workspaceDidEndTransaction:] + 188
20 FrontBoardServices 0x0000000114ebb76b __FBSSERIALQUEUE_IS_CALLING_OUT_TO_A_BLOCK__ + 24
21 FrontBoardServices 0x0000000114ebb5e4 -[FBSSerialQueue _performNext] + 189
22 FrontBoardServices 0x0000000114ebb96d -[FBSSerialQueue _performNextFromRunLoopSource] + 45
23 CoreFoundation 0x0000000110ef0311 __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 17
24 CoreFoundation 0x0000000110ed559c __CFRunLoopDoSources0 + 556
25 CoreFoundation 0x0000000110ed4a86 __CFRunLoopRun + 918
26 CoreFoundation 0x0000000110ed4494 CFRunLoopRunSpecific + 420
27 UIKit 0x0000000111368db6 -[UIApplication _run] + 434
28 UIKit 0x000000011136ef34 UIApplicationMain + 159
29 Fire 0x000000010c1dfccf main + 111
30 libdyld.dylib 0x00000001135aa68d start + 1
31 ??? 0x0000000000000001 0x0 + 1
)
2016-09-30 14:05:45.196440 Fire[7008:562825] [] sa_dst_compare_internal 144libc++abi.dylib: terminating with uncaught exception of type NSException
.131.80.217:0#0 = 144.131.80.247:0#0
(lldb)
There may be a couple of things that went wrong:
You did not install the Firebase/Auth pod.
FIRApp is not done configuring itself and you are calling a Firebase function right away.
I think it is most likely to be the first one. In that case, open the Podfile and add pod 'Firebase/Auth'.
This question was ultimately solved by not installing all of Firebase's features/pods, only the ones the app actually uses.
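As a rough sketch (the exact pod list depends on which features the app really uses), a trimmed Podfile for email/password sign-in alone could look like:
target 'Fire' do
use_frameworks!
# Only the pieces needed for authentication
pod 'Firebase/Core'
pod 'Firebase/Auth'
end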

Convolution issue in Caffe

I have 96x96 pixel grayscale images stored in HDF5 files. I am trying to do multi-output regression using Caffe, but convolution is not working. What exactly is the problem here? Why is convolution not working?
I0122 17:18:39.474860 5074 net.cpp:67] Creating Layer fkp
I0122 17:18:39.474889 5074 net.cpp:356] fkp -> data
I0122 17:18:39.474930 5074 net.cpp:356] fkp -> label
I0122 17:18:39.474967 5074 net.cpp:96] Setting up fkp
I0122 17:18:39.474987 5074 hdf5_data_layer.cpp:57] Loading filename from train.txt
I0122 17:18:39.475103 5074 hdf5_data_layer.cpp:69] Number of files: 1
I0122 17:18:39.475131 5074 hdf5_data_layer.cpp:29] Loading HDF5 filefacialkp-train.hd5
I0122 17:18:40.337786 5074 hdf5_data_layer.cpp:49] Successully loaded 4934 rows
I0122 17:18:40.337862 5074 hdf5_data_layer.cpp:81] output data size: 100,9216,1,1
I0122 17:18:40.337906 5074 net.cpp:103] Top shape: 100 9216 1 1 (921600)
I0122 17:18:40.337929 5074 net.cpp:103] Top shape: 100 30 1 1 (3000)
I0122 17:18:40.337971 5074 net.cpp:67] Creating Layer conv1
I0122 17:18:40.338001 5074 net.cpp:394] conv1 <- data
I0122 17:18:40.338069 5074 net.cpp:356] conv1 -> conv1
I0122 17:18:40.338109 5074 net.cpp:96] Setting up conv1
F0122 17:18:40.599761 5074 blob.cpp:13] Check failed: height >= 0 (-3 vs. 0)
My prototxt layer file is like this
name: "LogReg"
layers {
top: "data"
top: "label"
name: "fkp"
type: HDF5_DATA
hdf5_data_param {
source: "train.txt"
batch_size: 100
}
include {
phase: TRAIN
}
}
layers {
bottom: "data"
top: "conv1"
name: "conv1"
type: CONVOLUTION
blobs_lr: 1
blobs_lr: 2
convolution_param {
num_output: 64
kernel_size: 5
stride: 1
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layers {
bottom: "conv1"
top: "pool1"
name: "pool1"
type: POOLING
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layers {
bottom: "pool1"
top: "conv2"
name: "conv2"
type: CONVOLUTION
blobs_lr: 1
blobs_lr: 2
convolution_param {
num_output: 256
kernel_size: 5
stride: 1
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layers {
bottom: "conv2"
top: "pool2"
name: "pool2"
type: POOLING
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layers {
bottom: "pool2"
top: "ip1"
name: "ip1"
type: INNER_PRODUCT
blobs_lr: 1
blobs_lr: 2
inner_product_param {
num_output: 500
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layers {
bottom: "ip1"
top: "ip1"
name: "relu1"
type: RELU
}
layers {
bottom: "ip1"
top: "ip2"
name: "ip2"
type: INNER_PRODUCT
blobs_lr: 1
blobs_lr: 2
inner_product_param {
num_output: 30
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
}
layers {
bottom: "ip2"
bottom: "label"
top: "loss"
name: "loss"
type: EUCLIDEAN_LOSS
}
The lines
I0122 17:18:40.337906 5074 net.cpp:103] Top shape: 100 9216 1 1 (921600)
I0122 17:18:40.337929 5074 net.cpp:103] Top shape: 100 30 1 1 (3000)
suggest that your input data is not in the correct shape. For an input batch of 100 96x96 grayscale images the shape should be: 100 1 96 96.
Try to change this. (My guess is that the shape is N C H W, where N is the number of images in the batch, C the number of channels, H the height and W the width.)
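If the HDF5 file was written with each image flattened to a 9216-element vector, it can be rewritten in the N x C x H x W layout Caffe expects. A minimal sketch using h5py and numpy (the dataset names data and label match the tops of the HDF5 layer above; the output filename is just an example):
import h5py
import numpy as np

f = h5py.File('facialkp-train.hd5', 'r')
X = f['data'][:]    # currently 4934 x 9216 flattened vectors
y = f['label'][:]
f.close()

# Reshape to N x C x H x W for 96x96 grayscale images
X = X.reshape(-1, 1, 96, 96).astype(np.float32)

out = h5py.File('facialkp-train-reshaped.hd5', 'w')
out.create_dataset('data', data=X)
out.create_dataset('label', data=y.astype(np.float32))
out.close()
Remember to point train.txt at the reshaped file afterwards.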

Attach generic records (like a chipset) to a model (motherboard), without creating a new record for every single motherboard?

I'm building a PC-building app in Ruby on Rails. The app aims to give customers the ability to build their own computer.
I've run into a problem while laying out the basic data structure. This is what I basically have so far:
rails g model manufacturer name:string
rails g model cpu_socket name:string
rails g model cpu_architecture name:string
rails g model cpu_microarchitecture name:string cpu_architecture:references manufacturer:references
rails g model cpu model:string cpu_microarchitecture:references
rails g model motherboard_chipset name:string manufacturer:references
rails g model memory_type name:string
rails g model memory_socket name:string memory_type:references
But now things get complicated.
I don't know how to model the Motherboard.
The following is the pseudo-output I'm after:
---------[snip]---------
- #<Motherboard id: 158679, memory_max_size: 68719476736, nvidia_sli: 3, amd_crossfirex: 4>
- #<Chipset id: 14, name: 'Intel X99'>
- #<CpuSocket id: 4, type: 'LGA2011'>
- #<MemorySocket position: 0, type: '288-pin DDR4 DIMM', max_size: 8589934592, frequency: 2133, ecc: false>
- #<MemorySocket position: 1, type: '288-pin DDR4 DIMM', max_size: 8589934592, frequency: 2133, ecc: false>
- #<MemorySocket position: 2, type: '288-pin DDR4 DIMM', max_size: 8589934592, frequency: 2133, ecc: false>
- #<MemorySocket position: 3, type: '288-pin DDR4 DIMM', max_size: 8589934592, frequency: 2133, ecc: false>
- #<SataPort position: 0, type: 'SATA 6G'>
- #<SataPort position: 1, type: 'SATA 6G'>
- #<SataPort position: 2, type: 'SATA 6G'>
- #<SataPort position: 3, type: 'SATA 6G'>
- #<SataPort position: 4, type: 'SATA 6G'>
- #<SataPort position: 5, type: 'SATA 6G'>
- #<SataPort position: 6, type: 'SATA 6G'>
- #<SataPort position: 7, type: 'SATA 6G'>
- #<PciSlot position: 0, type: 'PCI-E 3.x', speed: 16>
- #<PciSlot position: 1, type: 'PCI-E 3.x', speed: 16>
- #<PciSlot position: 2, type: 'PCI-E 3.x', speed: 16>
- #<PciSlot position: 3, type: 'PCI-E 3.x', speed: 16>
- #<PciSlot position: 4, type: 'PCI-E 3.x', speed: 4>
...
---------[snip]---------
How do I attach these to the Motherboard? I believe it's safe to assume there will only be one chipset (therefore chipset:references), but what about the CpuSockets, MemorySockets, SataPorts and PciSlots? Am I just too scared and this is something even a toddler could answer?
I've made a few apps (<10) in Rails already, but this level of ActiveRecordry is quite new to me.
Of course, I could do it using :json or :hash, but I believe there is a way to do it in a slightly more ActiveRecord-ish way...
Up until now you had one-to-many relationships between your models. With Motherboard, you have a many-to-many relationship.
Given two tables
| *motherboard* |   | *sataport* |
| Intel         |   | SATA 6G    |
| AMD           |   | SATA 3G    |
If Intel can have SATA 6G and 3G, and AMD can also have SATA 6G and 3G, you will have to create a third table called motherboard_sataport with
| motherboard_id | sataport_id |
| 1 | 2 |
| 1 | 1 |
etc.
These are called join tables, and Rails can generate the migration for you. First create the motherboard table, then the sataport table. Then create a migration using
rails g migration CreateJoinTableMotherboardSataport motherboard sataport
More info is in the Rails guide on migrations; also go through the different types of associations in the Rails associations guide.
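As a sketch of how the associations could then be declared (class and table names follow the generator above and are otherwise assumptions), see below. Components that carry per-board attributes, such as a MemorySocket with a position, are usually better modelled with their own motherboard_id column, i.e. a plain has_many/belongs_to, or a has_many :through with a model on the join table:
class Motherboard < ActiveRecord::Base
belongs_to :motherboard_chipset
has_and_belongs_to_many :sataports   # uses the motherboards_sataports join table
has_many :memory_sockets             # memory_sockets table carries motherboard_id and position
end

class Sataport < ActiveRecord::Base
has_and_belongs_to_many :motherboards
end

class MemorySocket < ActiveRecord::Base
belongs_to :motherboard
end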
