Issues installing JFrog Container Registry via Docker image - docker

I'm currently trying to install the JFrog Container Registry via the Docker image, and it errors just as it reaches the UI after setup. My install process is as follows:
docker pull docker.bintray.io/jfrog/artifactory-jcr:latest
docker run --name artifactory -d -p 8081:8081 docker.bintray.io/jfrog/artifactory-jcr:latest
I navigate to localhost:8081/artifactory and, after the setup image, it redirects to localhost:8082/ui/ and shows page not found. I'm not sure why the port changes; I have looked at the installation documentation and there isn't anything about the port change. Changing the port back to 8081 just shows HTTP Status 404 – Not Found.
I'm on Docker for Windows, looking to test this out. Any ideas what I'm doing wrong?

Artifactory's internal architecture has changed: there are now separate microservices for Artifactory and its UI, fronted by the JFrog router, which listens on port 8082.
If you follow the Docker installation documentation, you can see that you also need to expose port 8082:
docker run --name artifactory -d -p 8081:8081 -p 8082:8082 docker.bintray.io/jfrog/artifactory-jcr:latest
You can also drop port 8081 and stick to 8082 only. Port 8081 allows direct access to Artifactory (bypassing the JFrog router) for better performance on high-load systems.
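For a quick sanity check once the container is up, something like the following should work (a sketch assuming the default ports; /artifactory/api/system/ping is Artifactory's standard health endpoint):
# the UI is served via the router port; startup can take a minute or two
curl -I http://localhost:8082/ui/
# direct access to Artifactory itself, bypassing the router
curl http://localhost:8081/artifactory/api/system/ping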

Related

How to "expose jenkins" to the internal network

I've installed a Jenkins Docker image on my CentOS 7 machine. When I run
docker run -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts
I get the message: Jenkins Initial Setup is required.
Since my CentOS 7 machine does not have a graphical interface, how do I "expose" Jenkins so that I can see the rendered interface from another machine?
I tried reading some documentation (https://www.jenkins.io/doc/book/security/services/), but I really don't know where and how these configurations are made.
Allow external access to port 8080 by opening your firewall, if you have one. Then simply access it at http://IP_OF_THE_JENKINS_SERVER:8080.
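On CentOS 7 the firewall is usually firewalld; a minimal sketch for opening the port (assuming the default firewalld setup) looks like this:
sudo firewall-cmd --permanent --add-port=8080/tcp
sudo firewall-cmd --reload
# confirm the port is now listed
sudo firewall-cmd --list-ports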

Drools Workbench docker container: Can't access the deployed server

I tried deploying KIE Workbench using the Docker command
docker run -p 8080:8080 -p 8001:8001 -d --name drools-wb jboss/business-central-workbench-showcase:latest
and the KIE Server using
docker run -p 8180:8080 -d --name kie-server --link drools-wb:kie-wb jboss/kie-server-showcase:latest
I deployed a sample DRL file to the KIE Server using Business Central. The screen image after deployment is as shown below.
The remote server is given as 172.17.0.3:8080. But when I try to test the deployed file using Postman, the server is not responding; the requests time out. The two endpoint services I tried to access are http://172.17.0.3:8080/kie-server/services/rest/server/ and http://172.17.0.3:8080/kie-server/services/rest/server/DemoRule_1.0.0-SNAPSHOT. First of all, I don't understand why it is deployed on some remote server and not localhost. Secondly, why is it not accessible? I even tried the KIE Server container endpoint http://localhost:8180/kie-server/services/rest/server/. But none of this works. Can someone help me understand the problem?
I found the answer myself. The service was available at http://localhost:8180/kie-server/services/rest/server/containers/instances/DemoRule_1.0.0-SNAPSHOT. That's where the actual controller was available. Port 8080 was the endpoint for the WildFly server, and the IP 172.17.0.3:8080 was the Docker container's internal address; it had nothing to do with the controllers.
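For anyone hitting the same confusion, a minimal test request against that endpoint could look like the sketch below; the kieserver/kieserver1! credentials and the batch-commands payload are assumptions based on the showcase image's usual defaults, so adjust them to your setup:
# fire the rules via the mapped host port, not the container IP
curl -u kieserver:kieserver1! \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"lookup": null, "commands": [{"fire-all-rules": {}}]}' \
  http://localhost:8180/kie-server/services/rest/server/containers/instances/DemoRule_1.0.0-SNAPSHOT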

Docker - Web browser cannot connect to a running web app container on server

I have successfully built my web app image and run a container on my server, which is an EC2 instance, with no error at all. But when I try to access the web page it returns no connection, even though I access it through the bound port of the host server. The build and run processes gave absolutely no errors, either build errors or connection errors. I'm new to both Docker and AWS, so I'm not sure what the problem could be. Any help from you guys is really appreciated. Thanks a lot!
Here is my Dockerfile
FROM ubuntu
WORKDIR /usr/src/app
# install dependencies, nothing wrong
RUN ...
COPY...
#
# exposed ports
EXPOSE 5000
EXPOSE 5100
CMD ...
Docker build
$ sudo docker build -t demo-app .
Docker run command
$ sudo docker run -it -p 8080:5000 -p 808:5100 --name demo-app-1 demo-app
"I accessed through the bound port of the host server."
This means the application is running and you can reach it with curl localhost:8080.
If that works after SSHing into the EC2 instance (i.e. the application responds on the EC2 machine's localhost), then there are mainly two possible issues:
The security group is not allowing connections on the desired port; allow 8080 and then check again (see the sketch below).
The instance is in a private subnet; verify the instance's subnet.
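A hedged two-step check, with EC2_PUBLIC_IP as a placeholder for your instance's public address:
# step 1: on the EC2 instance itself, confirm the app answers locally
curl -v http://localhost:8080/
# step 2: from your own machine; this only succeeds once the security
# group allows inbound TCP 8080 and the instance is publicly reachable
curl -v http://EC2_PUBLIC_IP:8080/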

Userland proxy error when launching docker image on Google Cloud Platform

I am trying to run a standard nginx container on one of my GCP VMs. When I run
docker run -it --rm -p 80:80 tiangolo/uwsgi-nginx-flask:python3.6
I get the following error:
Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
However, it is a clean VM instance I created. During VM creation I also checked the HTTP option to make sure port 80 is open (I need to add HTTPS later, but this is my first deployment test).
The image does work locally, so it seems to be a Google Cloud Platform configuration thing, I guess.
It was my own stupid error, sorry for asking the SO community.
So what did I do wrong? I had connected through the web client, which meant port 80 was already in use, causing all this havoc.
So just SSH in and try again, and it works.
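If someone else hits the same message, a quick generic way to see what already holds the port (not GCP-specific) is:
# list listeners on port 80 together with the owning process
sudo ss -tlnp | grep ':80 '
# netstat works as well on older distributions
sudo netstat -tlnp | grep ':80 '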
I tried to reproduce the issue on my end but did not hit any error. Here are the steps I took.
First, I spun up a Debian VM instance in the Google Cloud Platform and allowed incoming HTTP in the firewall for that VM instance so that I could access the site from outside.
Then I installed Docker on the VM instance. I followed this link.
After that, I made sure that the HTTP port was free on the VM instance, using the command below:
netstat -an | egrep 'Proto|LISTEN'
You may check the link here.
At this point, I issued the docker command you provided.
docker run -it --rm -p 80:80 tiangolo/uwsgi-nginx-flask:python3.6
I did not get any error and I could access the nginx page.
“Hello World from Flask in a uWSGI Nginx Docker container with Python 3.6 (default)”
If you spin up a new VM with the same Docker version, do you have the same issue? What kind of image is your VM running?

Accessing Elasticsearch Docker with Dropwizard - Connection Refused

In short: can I run an Elasticsearch and a Dropwizard app in separate Docker containers and allow them to see each other?
I am running Elasticsearch 6.2.2 from Docker (on Mac) using the command:
docker run -p 9200:9200 -p 9300:9300 -e "network.host=0.0.0.0" \
-e "http.port=9200" -e "discovery.type=single-node" \
docker.elastic.co/elasticsearch/elasticsearch:6.2.2
I can access Elasticsearch (POST & GET) fine using Postman directly on the Mac, e.g.
localhost:9200/testindex/_search
However, when running a Dropwizard application from a different Docker container which accesses the Elasticsearch instance, I get connection refused using the same host and port (localhost:9200).
I have no problems at all when running the Dropwizard app directly from an IDE; it only fails when running from a Docker container and accessing ES in a different container.
docker run -p 8080:8080 -p 8081:8081 testapp
Has anyone else had similar issues or solved this in the past?
I'm assuming it's network related, and that connecting to localhost from one Docker container will not map to the other container.
The issue you are facing is in the URL you pass to the Dropwizard container. As a container by default has its own networking, a value of localhost means the Dropwizard container itself, not what you see as your localhost from outside the container.
Please have a look at Docker networking and how you can link two containers by name. I would suggest checking out docker-compose for multi-container setups on a local machine.
What would also work (but is not good practice) is to pass the Dropwizard container your machine's IP as the Elasticsearch host, since you created a port mapping from your host into the Elasticsearch container. But better to have a look at compose and do it the way it is supposed to be done.
For details on how to use compose, please have a look at this answer with a similar example.
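As an illustration of the name-based approach, here is a sketch reusing the image names from the question; the app's Elasticsearch host setting would change from localhost to the container name elasticsearch:
# one user-defined bridge network for both containers
docker network create es-net
docker run -d --name elasticsearch --network es-net -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  docker.elastic.co/elasticsearch/elasticsearch:6.2.2
# the app now reaches ES at http://elasticsearch:9200
docker run -d --name testapp --network es-net -p 8080:8080 -p 8081:8081 testapp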
