Unable to access Elasticsearch running in Docker on macOS

I went through this SO question but still couldn't make it work. I followed this Elasticsearch tutorial to run it in dev mode with:
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.2.4
Elasticsearch starts but still I am unable to reach it with curl or browser. These are the logs:
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node"
docker.elastic.co/elasticsearch/elasticsearch:6.2.4
[2018-06-08T08:35:59,131][INFO ][o.e.n.Node ] [] initializing ...
[2018-06-08T08:35:59,209][INFO ][o.e.e.NodeEnvironment ] [vMpk2HC] using [1] data paths, mounts [[/ (overlay)]], net usable_space [31.7gb], net total_space [37.2gb], types [overlay]
[2018-06-08T08:35:59,209][INFO ][o.e.e.NodeEnvironment ] [vMpk2HC] heap size [1007.3mb], compressed ordinary object pointers [true]
[2018-06-08T08:35:59,211][INFO ][o.e.n.Node ] node name [vMpk2HC] derived from node ID [vMpk2HCTQNKxTMFmkE-0oA]; set [node.name] to override
[2018-06-08T08:35:59,211][INFO ][o.e.n.Node ] version[6.2.4], pid[1], build[ccec39f/2018-04-12T20:37:28.497551Z], OS[Linux/4.9.0-0.bpo.2-amd64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_161/25.161-b14]
[2018-06-08T08:35:59,212][INFO ][o.e.n.Node ] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.muZEBoID, -XX:+HeapDumpOnOutOfMemoryError, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:logs/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Des.cgroups.hierarchy.override=/, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config]
[2018-06-08T08:36:01,407][INFO ][o.e.p.PluginsService ] [vMpk2HC] loaded module [aggs-matrix-stats]
[2018-06-08T08:36:01,407][INFO ][o.e.p.PluginsService ] [vMpk2HC] loaded module [analysis-common]
[2018-06-08T08:36:01,407][INFO ][o.e.p.PluginsService ] [vMpk2HC] loaded module [ingest-common]
[2018-06-08T08:36:01,407][INFO ][o.e.p.PluginsService ] [vMpk2HC] loaded module [lang-expression]
[2018-06-08T08:36:01,407][INFO ][o.e.p.PluginsService ] [vMpk2HC] loaded module [lang-mustache]
[2018-06-08T08:36:01,407][INFO ][o.e.p.PluginsService ] [vMpk2HC] loaded module [lang-painless]
[2018-06-08T08:36:01,408][INFO ][o.e.p.PluginsService ] [vMpk2HC] loaded module [mapper-extras]
[2018-06-08T08:36:01,408][INFO ][o.e.p.PluginsService ] [vMpk2HC] loaded module [parent-join]
[2018-06-08T08:36:01,408][INFO ][o.e.p.PluginsService ] [vMpk2HC] loaded module [percolator]
[2018-06-08T08:36:01,408][INFO ][o.e.p.PluginsService ] [vMpk2HC] loaded module [rank-eval]
[2018-06-08T08:36:01,408][INFO ][o.e.p.PluginsService ] [vMpk2HC] loaded module [reindex]
[2018-06-08T08:36:01,408][INFO ][o.e.p.PluginsService ] [vMpk2HC] loaded module [repository-url]
[2018-06-08T08:36:01,408][INFO ][o.e.p.PluginsService ] [vMpk2HC] loaded module [transport-netty4]
[2018-06-08T08:36:01,408][INFO ][o.e.p.PluginsService ] [vMpk2HC] loaded module [tribe]
[2018-06-08T08:36:01,409][INFO ][o.e.p.PluginsService ] [vMpk2HC] loaded plugin [ingest-geoip]
[2018-06-08T08:36:01,409][INFO ][o.e.p.PluginsService ] [vMpk2HC] loaded plugin [ingest-user-agent]
[2018-06-08T08:36:01,409][INFO ][o.e.p.PluginsService ] [vMpk2HC] loaded plugin [x-pack-core]
[2018-06-08T08:36:01,409][INFO ][o.e.p.PluginsService ] [vMpk2HC] loaded plugin [x-pack-deprecation]
[2018-06-08T08:36:01,409][INFO ][o.e.p.PluginsService ] [vMpk2HC] loaded plugin [x-pack-graph]
[2018-06-08T08:36:01,410][INFO ][o.e.p.PluginsService ] [vMpk2HC] loaded plugin [x-pack-logstash]
[2018-06-08T08:36:01,410][INFO ][o.e.p.PluginsService ] [vMpk2HC] loaded plugin [x-pack-ml]
[2018-06-08T08:36:01,410][INFO ][o.e.p.PluginsService ] [vMpk2HC] loaded plugin [x-pack-monitoring]
[2018-06-08T08:36:01,410][INFO ][o.e.p.PluginsService ] [vMpk2HC] loaded plugin [x-pack-security]
[2018-06-08T08:36:01,410][INFO ][o.e.p.PluginsService ] [vMpk2HC] loaded plugin [x-pack-upgrade]
[2018-06-08T08:36:01,410][INFO ][o.e.p.PluginsService ] [vMpk2HC] loaded plugin [x-pack-watcher]
[2018-06-08T08:36:05,966][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/104] [Main.cc#128] controller (64 bit): Version 6.2.4 (Build 524e7fe231abc1) Copyright (c) 2018 Elasticsearch BV
[2018-06-08T08:36:06,529][INFO ][o.e.d.DiscoveryModule ] [vMpk2HC] using discovery type [single-node]
[2018-06-08T08:36:07,313][INFO ][o.e.n.Node ] initialized
[2018-06-08T08:36:07,313][INFO ][o.e.n.Node ] [vMpk2HC] starting ...
[2018-06-08T08:36:07,457][INFO ][o.e.t.TransportService ] [vMpk2HC] publish_address {172.17.0.2:9300}, bound_addresses {0.0.0.0:9300}
[2018-06-08T08:36:07,488][WARN ][o.e.b.BootstrapChecks ] [vMpk2HC] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2018-06-08T08:36:07,520][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [vMpk2HC] publish_address {172.17.0.2:9200}, bound_addresses {0.0.0.0:9200}
[2018-06-08T08:36:07,520][INFO ][o.e.n.Node ] [vMpk2HC] started
[2018-06-08T08:36:07,698][INFO ][o.e.g.GatewayService ] [vMpk2HC] recovered [0] indices into cluster_state
[2018-06-08T08:36:08,241][INFO ][o.e.l.LicenseService ] [vMpk2HC] license [70cc45f9-4b48-4c75-a3db-9e469d607f3e] mode [basic] - valid
[2018-06-08T08:36:17,455][INFO ][o.e.c.m.MetaDataCreateIndexService] [vMpk2HC] [.monitoring-es-6-2018.06.08] creating index, cause [auto(bulk api)], templates [.monitoring-es], shards [1]/[0], mappings [doc]
[2018-06-08T08:36:17,751][INFO ][o.e.c.r.a.AllocationService] [vMpk2HC] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.monitoring-es-6-2018.06.08][0]] ...]).
I installed Docker with Homebrew using brew install docker docker-compose docker-machine xhyve docker-machine-driver-xhyve and Elasticsearch with brew install elasticsearch.
If I install Elasticsearch with Homebrew and start it directly on my local machine, it works. But the Docker container doesn't.
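Worth noting: with Docker set up through docker-machine and the xhyve driver, published ports are exposed on the docker-machine VM's IP rather than on localhost. A quick way to check (a sketch; the machine name default is an assumption):
docker-machine ip default
curl http://$(docker-machine ip default):9200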

I am not sure what the problem was, but I uninstalled Docker and installed it again with the .dmg file provided by Docker.
It works now and I am finally able to access Elasticsearch :)
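With Docker Desktop, ports published with -p are reachable on localhost, so a minimal check against the published HTTP port looks like this (a sketch, not part of the original answer):
curl http://localhost:9200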

Looks good to me, no errors in the logs. You need to connect using the IP address you can see in the logs and all will be fine
curl http://172.17.0.2:9200
If you log into the container, you can access your ES with curl http://localhost:9200, but since you're on the host, you need to use the IP address that the container published ES to.
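To confirm which side is failing, you can curl from inside the container first (a sketch; the container ID is a placeholder and it assumes curl is available in the image):
docker ps --filter ancestor=docker.elastic.co/elasticsearch/elasticsearch:6.2.4
docker exec -it <container-id> curl -s http://localhost:9200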

Related

Unable to curl Docker container

I am following the tutorial https://docker-curriculum.com/ and running the Elasticsearch image on Windows 10, but curl is not responding on the default URL that is mentioned while running the Docker image.
Any help is appreciated.
I ran the following commands.
$ docker pull docker.elastic.co/elasticsearch/elasticsearch:6.3.2
$ docker run -d --name es -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.3.2
e8be84174fec0bcb796265d93e67ec70b6ac77a54a4ff65be7a51f9a64037f43
Listed the containers
docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
572b79717617 docker.elastic.co/elasticsearch/elasticsearch:6.3.2 "/usr/local/bin/dock…" 12 hours ago Up 12 hours 0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp es
9ff0b5b40306 acerab/foodtrucks-web "bash" 14 hours ago Up 14 hours 5000/tcp
Listed the Elasticsearch container (es) logs:
$ docker container logs es
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[2023-01-19T03:57:00,940][INFO ][o.e.n.Node ] [] initializing ...
[2023-01-19T03:57:01,007][INFO ][o.e.e.NodeEnvironment ] [-dsoNEx] using [1] data paths, mounts [[/ (overlay)]], net usable_space [233.8gb], net total_space [250.9gb], types [overlay]
[2023-01-19T03:57:01,007][INFO ][o.e.e.NodeEnvironment ] [-dsoNEx] heap size [989.8mb], compressed ordinary object pointers [true]
[2023-01-19T03:57:01,010][INFO ][o.e.n.Node ] [-dsoNEx] node name derived from node ID [-dsoNExNSMmYeeQDYc3-Og]; set [node.name] to override
[2023-01-19T03:57:01,010][INFO ][o.e.n.Node ] [-dsoNEx] version[6.3.2], pid[1], build[default/tar/053779d/2018-07-20T05:20:23.451332Z], OS[Linux/5.10.102.1-microsoft-standard-WSL2/amd64], JVM["Oracle Corporation"/OpenJDK 64-Bit Server VM/10.0.2/10.0.2+13]
[2023-01-19T03:57:01,010][INFO ][o.e.n.Node ] [-dsoNEx] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.mV5lGAl6, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -XX:UseAVX=2, -Des.cgroups.hierarchy.override=/, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=tar]
[2023-01-19T03:57:03,139][INFO ][o.e.p.PluginsService ] [-dsoNEx] loaded plugin [ingest-user-agent]
[2023-01-19T03:57:06,403][INFO ][o.e.x.s.a.s.FileRolesStore] [-dsoNEx] parsed [0] roles from file [/usr/share/elasticsearch/config/roles.yml]
[2023-01-19T03:57:06,987][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/118] [Main.cc#109] controller (64 bit): Version 6.3.2 (Build 903094f295d249) Copyright (c) 2018 Elasticsearch BV
[2023-01-19T03:57:07,660][INFO ][o.e.d.DiscoveryModule ] [-dsoNEx] using discovery type [single-node]
[2023-01-19T03:57:08,732][INFO ][o.e.n.Node ] [-dsoNEx] initialized
[2023-01-19T03:57:08,732][INFO ][o.e.n.Node ] [-dsoNEx] starting ...
[2023-01-19T03:57:08,944][INFO ][o.e.t.TransportService ] [-dsoNEx] publish_address {172.17.0.2:9300}, bound_addresses {0.0.0.0:9300}
[2023-01-19T03:57:08,962][WARN ][o.e.b.BootstrapChecks ] [-dsoNEx] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2023-01-19T03:57:08,998][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [-dsoNEx] publish_address {172.17.0.2:9200}, bound_addresses {0.0.0.0:9200}
[2023-01-19T03:57:08,999][INFO ][o.e.n.Node ] [-dsoNEx] started
But when I run curl, I am not able to connect.
$ curl 0.0.0.0:9200
When I issue curl against 127.0.0.2:9200, I see the response:
curl 127.0.0.2:9200
StatusCode : 200
StatusDescription : OK
Content : {
"name" : "-dsoNEx",
"cluster_name" : "docker-cluster",
"cluster_uuid" : "UzUGppOxQ5ePcxLNB49HvA",
"version" : {
"number" : "6.3.2",
"build_flavor" : "default",
"build_type" : "ta...
RawContent : HTTP/1.1 200 OK
Content-Length: 494
Content-Type: application/json; charset=UTF-8
{
"name" : "-dsoNEx",
"cluster_name" : "docker-cluster",
"cluster_uuid" : "UzUGppOxQ5ePcxLNB49HvA",
"vers...
Forms : {}
Headers : {[Content-Length, 494], [Content-Type, application/json; charset=UTF-8]}
Images : {}
InputFields : {}
Links : {}
ParsedHtml : mshtml.HTMLDocumentClass
RawContentLength : 494
Connecting to 0.0.0.0 does not make any sense, as it is not the address of your device. The 0.0.0.0:9200->9200/tcp output from docker means that the service is listening on all interfaces (i.e., 0.0.0.0). Thus you can connect via 127.0.0.1 or your public IP address from your network card, for instance.
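For reference, curl in Windows PowerShell is an alias for Invoke-WebRequest (which produced the object output above); curl.exe invokes the actual curl binary. From the host, for example:
curl.exe http://127.0.0.1:9200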

Why starting Artifactory fails

I want to install Artifactory manually on an Ubuntu Docker image, without using the Artifactory image from Docker Hub.
What I have done so far:
Got an Ubuntu image with JDK 11 installed.
Installed Artifactory with apt-get.
But when starting the Artifactory service with service artifactory start I get the following logs with errors:
root@f01a31f43dc0:/# service artifactory start
2021-12-15T23:57:37.545Z [shell] [INFO ] [] [artifactory:81 ] [main] - Starting Artifactory tomcat as user artifactory...
2021-12-15T23:57:37.590Z [shell] [INFO ] [] [installerCommon.sh:1519 ] [main] - Checking open files and processes limits
2021-12-15T23:57:37.637Z [shell] [INFO ] [] [installerCommon.sh:1522 ] [main] - Current max open files is 1048576
2021-12-15T23:57:37.694Z [shell] [INFO ] [] [installerCommon.sh:1533 ] [main] - Current max open processes is unlimited
.shared.security value is of wrong data type. Correct type should be !!map
.shared.node value is of wrong data type. Correct type should be !!map
.shared.database value is of wrong data type. Correct type should be !!map
yaml validation failed
2021-12-15T23:57:37.798Z [shell] [WARN ] [] [installerCommon.sh:721 ] [main] - System.yaml validation failed
Database connection check failed Could not determine database type
2021-12-15T23:57:38.172Z [shell] [INFO ] [] [installerCommon.sh:3381 ] [main] - Setting JF_SHARED_NODE_ID to f01a31f43dc0
2021-12-15T23:57:38.424Z [shell] [INFO ] [] [installerCommon.sh:3381 ] [main] - Setting JF_SHARED_NODE_IP to 172.17.0.2
2021-12-15T23:57:38.652Z [shell] [INFO ] [] [installerCommon.sh:3381 ] [main] - Setting JF_SHARED_NODE_NAME to f01a31f43dc0
2021-12-15T23:57:39.348Z [shell] [INFO ] [] [artifactoryCommon.sh:186 ] [main] - Using Tomcat template to generate : /opt/jfrog/artifactory/app/artifactory/tomcat/conf/server.xml
2021-12-15T23:57:39.711Z [shell] [INFO ] [] [artifactoryCommon.sh:1008 ] [main] - Resolved ${artifactory.port||8081} to default value : 8081
2021-12-15T23:57:39.959Z [shell] [INFO ] [] [artifactoryCommon.sh:1008 ] [main] - Resolved ${artifactory.tomcat.connector.sendReasonPhrase||false} to default value : false
2021-12-15T23:57:40.244Z [shell] [INFO ] [] [artifactoryCommon.sh:1008 ] [main] - Resolved ${artifactory.tomcat.connector.maxThreads||200} to default value : 200
2021-12-15T23:57:40.705Z [shell] [INFO ] [] [artifactoryCommon.sh:1008 ] [main] - Resolved ${artifactory.tomcat.maintenanceConnector.port||8091} to default value : 8091
2021-12-15T23:57:40.997Z [shell] [INFO ] [] [artifactoryCommon.sh:1008 ] [main] - Resolved ${artifactory.tomcat.maintenanceConnector.maxThreads||5} to default value : 5
2021-12-15T23:57:41.278Z [shell] [INFO ] [] [artifactoryCommon.sh:1008 ] [main] - Resolved ${artifactory.tomcat.maintenanceConnector.acceptCount||5} to default value : 5
2021-12-15T23:57:41.751Z [shell] [INFO ] [] [artifactoryCommon.sh:1008 ] [main] - Resolved ${access.http.port||8040} to default value : 8040
2021-12-15T23:57:42.041Z [shell] [INFO ] [] [artifactoryCommon.sh:1008 ] [main] - Resolved ${access.tomcat.connector.sendReasonPhrase||false} to default value : false
2021-12-15T23:57:42.341Z [shell] [INFO ] [] [artifactoryCommon.sh:1008 ] [main] - Resolved ${access.tomcat.connector.maxThreads||50} to default value : 50
2021-12-15T23:57:42.906Z [shell] [INFO ] [] [systemYamlHelper.sh:527 ] [main] - Resolved JF_PRODUCT_HOME (/opt/jfrog/artifactory) from environment variable
2021-12-15T23:57:43.320Z [shell] [INFO ] [] [artifactoryCommon.sh:1008 ] [main] - Resolved ${shared.tomcat.workDir||/opt/jfrog/artifactory/var/work/artifactory/tomcat} to default value : /opt/jfrog/artifactory/var/work/artifactory/tomcat
========================
JF Environment variables
========================
JF_SHARED_NODE_ID : f01a31f43dc0
JF_SHARED_NODE_IP : 172.17.0.2
JF_ARTIFACTORY_PID : /var/run/artifactory.pid
JF_SYSTEM_YAML : /opt/jfrog/artifactory/var/etc/system.yaml
JF_PRODUCT_HOME : /opt/jfrog/artifactory
JF_ROUTER_TOPOLOGY_LOCAL_REQUIREDSERVICETYPES : jfrt,jfac,jfmd,jffe,jfob
JF_SHARED_NODE_NAME : f01a31f43dc0
2021-12-15T23:57:45.827Z [shell] [ERROR] [] [installerCommon.sh:3267 ] [main] - ##############################################################################
2021-12-15T23:57:45.890Z [shell] [ERROR] [] [installerCommon.sh:3268 ] [main] - Ownership mismatch. You can try executing following instruction and do a restart
2021-12-15T23:57:45.959Z [shell] [ERROR] [] [installerCommon.sh:3269 ] [main] - Command : chown -R artifactory:artifactory /opt/jfrog/artifactory/var/log
2021-12-15T23:57:46.029Z [shell] [ERROR] [] [installerCommon.sh:3270 ] [main] - ##############################################################################
I'm not sure what I'm missing in this installation process.
The error clearly indicates a permission issue on the /opt/jfrog/artifactory/var/log folder; run chown -R artifactory:artifactory /opt/jfrog/artifactory/var/log and restart the service to solve it.
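For example, inside the container (a sketch, assuming the artifactory user and group created by the package install):
ls -ld /opt/jfrog/artifactory/var/log
chown -R artifactory:artifactory /opt/jfrog/artifactory/var/log
service artifactory restart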

Artifactory OSS install with Docker

I'm trying to install Artifactory OSS using Docker.
I'm running Ubuntu 18.04 and Docker 19.03.8.
I followed the JFrog installation guide https://www.jfrog.com/confluence/display/JFROG/Installing+Artifactory#InstallingArtifactory-DockerInstallation
I did all the steps, except that chown -R 1030:1030 $JFROG_HOME/artifactory/var must be run with sudo.
The container starts, but when I go to http://myhost:8082/ui/ I only see a page with the JFrog logo pulsing (a zoom-in/zoom-out effect).
In the logs I see:
################################################################
### All services started successfully in 116.053 seconds ###
################################################################
2020-03-26T07:27:05.070Z [jfrou] [INFO ] [ ] [server_configuration.go:61 ] [main] - Server configuration reloaded on localhost:8046
2020-03-26T07:27:05.070Z [jfrou] [INFO ] [ ] [server_configuration.go:61 ] [main ] - Server configuration reloaded on localhost:8049
2020-03-26T07:27:05.071Z [jfrou] [INFO ] [ ] [server_configuration.go:61 ] [main ] - Server configuration reloaded on :8082
2020-03-26T07:27:05.109Z [jfac ] [INFO ] [ ] [alConfigurationServiceBase:182] [c-default-executor-1] - Loading configuration from db finished successfully
2020-03-26T07:27:07.104Z [jfrou] [INFO ] [ ] [server_configuration.go:61 ] [main ] - Server configuration reloaded on :8082
2020-03-26T07:27:07.105Z [jfrou] [INFO ] [ ] [server_configuration.go:61 ] [main ] - Server configuration reloaded on localhost:8046
2020-03-26T07:27:07.105Z [jfrou] [INFO ] [ ] [server_configuration.go:61 ] [main ] - Server configuration reloaded on localhost:8049
2020-03-26T07:27:10.084Z [jfrou] [WARN ] [6ec6165e7fec2711] [ternal_topology_verifier.go:92] [main ] - failed pinging external node 'f461d2eebfe3' at address 'http://172.17.0.2:8082': Get http://172.17.0.2:8082/router/api/v1/system/ping: context deadline exceeded
The last line appears when I request the url in the browser.
What can I do?
Edit:
I also tried with docker-compose, again following the JFrog guide.
First run: Artifactory is not starting!
After editing .jfrog/artifactory/var/etc/system.yaml, replacing 127.0.0.1 with my host name, and running config.sh again, Artifactory starts.
But I have the same problem when accessing http://myhost:8082/ui/
I don't understand what's happening and why it is not working when following the JFrog guides...
In my case, it turned out that my proxy settings were blocking the HTTP client from contacting the local endpoint (the failed ping to http://172.17.0.2:8082 in the log above).
I updated docker-compose.yml to include no_proxy and noproxy environment variables, and Artifactory now runs without any complaint.
services:
  artifactory:
    environment:
      - http_proxy=*********
      - https_proxy=*********
      - no_proxy=*********
      - noproxy=172.16.0.0/12
    image: docker.bintray.io/jfrog/artifactory-oss:latest
The solution was quite simple: try another browser!
With Edge it's not working.
With Firefox it's working...

Unable to build app using Docker

I have set up my application on DigitalOcean using Docker. It was working fine, but a few days back it stopped working. Whenever I try to build and deploy the application, it doesn't show any progress.
When using the following commands
docker-compose build && docker-compose stop && docker-compose up -d
the system gets stuck on the following output:
db uses an image, skipping
elasticsearch uses an image, skipping
redis uses an image, skipping
Building app
It doesn't show any further progress.
Following are the docker-compose logs:
db_1 | LOG: received smart shutdown request
db_1 | LOG: autovacuum launcher shutting down
db_1 | LOG: shutting down
db_1 | LOG: database system is shut down
db_1 | LOG: database system was shut down at 2018-01-10 02:25:36 UTC
db_1 | LOG: MultiXact member wraparound protections are now enabled
db_1 | LOG: database system is ready to accept connections
db_1 | LOG: autovacuum launcher started
redis_1 | 11264:C 26 Mar 15:20:17.028 # Failed opening the RDB file root (in server root dir /run) for saving: Permission denied
redis_1 | 1:M 26 Mar 15:20:17.127 # Background saving error
redis_1 | 1:M 26 Mar 15:20:23.038 * 1 changes in 3600 seconds. Saving...
redis_1 | 1:M 26 Mar 15:20:23.038 * Background saving started by pid 11265
elasticsearch | [2018-03-06T01:18:25,729][WARN ][o.e.b.BootstrapChecks ] [_IRIbyW] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
elasticsearch | [2018-03-06T01:18:28,794][INFO ][o.e.c.s.ClusterService ] [_IRIbyW] new_master {_IRIbyW}{_IRIbyWCSoaUaKOLN93Fzg}{TFK38PIgRT6Kl62mTGBORg}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
elasticsearch | [2018-03-06T01:18:28,835][INFO ][o.e.h.n.Netty4HttpServerTransport] [_IRIbyW] publish_address {172.17.0.4:9200}, bound_addresses {0.0.0.0:9200}
elasticsearch | [2018-03-06T01:18:28,838][INFO ][o.e.n.Node ] [_IRIbyW] started
elasticsearch | [2018-03-06T01:18:29,104][INFO ][o.e.g.GatewayService ] [_IRIbyW] recovered [4] indices into cluster_state
elasticsearch | [2018-03-06T01:18:29,799][INFO ][o.e.c.r.a.AllocationService] [_IRIbyW] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[product_records][2]] ...]).
elasticsearch | [2018-03-07T16:11:18,449][INFO ][o.e.n.Node ] [_IRIbyW] stopping ...
elasticsearch | [2018-03-07T16:11:18,575][INFO ][o.e.n.Node ] [_IRIbyW] stopped
elasticsearch | [2018-03-07T16:11:18,575][INFO ][o.e.n.Node ] [_IRIbyW] closing ...
elasticsearch | [2018-03-07T16:11:18,601][INFO ][o.e.n.Node ] [_IRIbyW] closed
elasticsearch | [2018-03-07T16:11:37,993][INFO ][o.e.n.Node ] [] initializing ...
WARNING: Connection pool is full, discarding connection: 'Ipaddress'
I am using Postgres, Redis, Elasticsearch and Sidekiq images in my Rails application, but I have no clue where things are going wrong.
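One way to narrow down where the build hangs (a debugging sketch, not from the original thread; the service name app matches the compose output above) is to build just that service without the cache and watch the step at which output stops:
docker-compose build --no-cache app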

How to view Solr logs from a Docker container

I want to view the logs to check whether a library is correctly installed.
I use Solr in a Docker container.
How can I do that?
So, if you're using the official image, running it like this:
docker run --name my_solr -d -p 8983:8983 -t solr
you can see the logs with docker logs:
docker logs my_solr
These are my logs, for example:
Starting Solr 7.2.0
2018-01-10 11:05:29.618 INFO (main) [ ] o.e.j.s.Server jetty-9.3.20.v20170531
2018-01-10 11:05:30.570 INFO (main) [ ] o.a.s.s.SolrDispatchFilter ___ _ Welcome to Apache Solr™ version 7.2.0
2018-01-10 11:05:30.570 INFO (main) [ ] o.a.s.s.SolrDispatchFilter / __| ___| |_ _ Starting in standalone mode on port 8983
2018-01-10 11:05:30.570 INFO (main) [ ] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_| Install dir: /opt/solr
2018-01-10 11:05:30.611 INFO (main) [ ] o.a.s.s.SolrDispatchFilter |___/\___/_|_| Start time: 2018-01-10T11:05:30.574Z
2018-01-10 11:05:30.662 INFO (main) [ ] o.a.s.c.SolrResourceLoader Using system property solr.solr.home: /opt/solr/server/solr
2018-01-10 11:05:30.735 INFO (main) [ ] o.a.s.c.SolrXmlConfig Loading container configuration from /opt/solr/server/solr/solr.xml
2018-01-10 11:05:31.280 INFO (main) [ ] o.a.s.c.SolrResourceLoader [null] Added 0 libs to classloader, from paths: []
2018-01-10 11:05:32.918 INFO (main) [ ] o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath /opt/solr/server/solr
2018-01-10 11:05:33.108 INFO (main) [ ] o.e.j.s.Server Started @5029ms
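A usage note (not from the original answer): to keep following the log output as it is written, add -f:
docker logs -f my_solr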
