SonarQube using Docker and how to run it? - docker

I need to install SonarQube using Docker.
I tried the Dockerfile below to install it:
`FROM ubuntu:14.04
RUN apt-get update
RUN apt-get -y install unzip curl openjdk-7-jre-headless
RUN cd /tmp && curl -L -O https://sonarsource.bintray.com/Distribution/sonarqube/sonarqube-7.0.zip
RUN unzip /tmp/sonarqube-7.0.zip
EXPOSE 9000
CMD ["chmod +x","/tmp/sonarqube-7.0/bin/linux-x86-64/sonar.sh"]
CMD ["/sonarqube-7.0/bin/linux-x86-64/sonar.sh","start"]`
Its build is successful.
MY QUESTION IS:
1. How can I run it on the server?
I tried "docker run -d --name image -p 9000:9000 -p 9092:9092 sonarqube"
but it's not connecting. Can anyone help me from here, or do I need to change something in the script?

Try the steps below.
Modify the Dockerfile's last line to:
RUN echo "/sonarqube-7.0/bin/linux-x86-64/sonar.sh start" >> .bashrc
Rebuild the image.
Start a container:
docker run -d --name image -p 9000:9000 -p 9092:9092 sonarqube /bin/bash
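A common alternative to the .bashrc approach is to run SonarQube in the foreground as the image's CMD, so the container stays alive on its own. A minimal sketch, keeping the question's base image and packages (note that SonarQube 7.0 officially requires Java 8, so the openjdk-7 package may itself need updating):
FROM ubuntu:14.04
RUN apt-get update && apt-get -y install unzip curl openjdk-7-jre-headless
RUN cd /tmp && curl -L -O https://sonarsource.bintray.com/Distribution/sonarqube/sonarqube-7.0.zip && unzip sonarqube-7.0.zip -d /opt
EXPOSE 9000
# "console" keeps sonar.sh in the foreground; "start" daemonizes, so the container would exit
CMD ["/opt/sonarqube-7.0/bin/linux-x86-64/sonar.sh", "console"]
With this, docker run -d --name sonar -p 9000:9000 <your-image-tag> keeps running and serves on port 9000.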

port mapping -p 8080:8080 vs --net=host

Dockerfile
FROM ubuntu:20.04
# Setup
RUN apt-get update && apt-get install -y unzip xz-utils git openssh-client curl python3 && apt-get upgrade -y && rm -rf /var/cache/apt
# Install Flutter
RUN git clone https://github.com/flutter/flutter.git /usr/local/flutter
ENV PATH="/usr/local/flutter/bin:/usr/local/flutter/bin/cache/dart-sdk/bin:${PATH}"
RUN flutter channel master
RUN flutter upgrade
RUN flutter config --enable-web
RUN flutter doctor -v
# Copy files to container and get dependencies
COPY . /usr/local/bin/app
WORKDIR /usr/local/bin/app
RUN flutter pub get
RUN flutter build web
# Document the exposed port and start server
EXPOSE 8080
RUN chmod +x /usr/local/bin/app/server/server.sh
ENTRYPOINT [ "/usr/local/bin/app/server/server.sh" ]
The entrypoint server.sh file:
#!/bin/bash
cd build/web/
python3 -m http.server 8080
I build an image: docker build --network=host --tag image1 .
Then I try to run it:
docker run -d -p 8000:8080 image1 -- doesn't work. No error, but the page just doesn't load.
docker run -d image1 -- doesn't work. No error, but the page just doesn't load.
docker run -d --net=host image1 -- works!
Why does -p 8080:8080 not work whereas --net=host works?
How are you trying to access your app: at port 8000 or 8080? Your title and the command you posted don't seem to match. Are you trying to map 8080 on your machine to 8080 in the app? If so, you have a typo in your command: it maps host port 8000 to container port 8080, and I'm guessing you're then trying to access it at localhost:8080 and finding nothing.
I think it should just be docker run -d -p 8080:8080 image1, and then you should be able to access it at localhost:8080 just fine.
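One more thing worth checking with port mapping: -p only works if the server inside the container listens on an externally reachable interface, not just 127.0.0.1. Python's http.server binds to all interfaces by default, but you can make that explicit in server.sh (a small tweak to the file from the question):
#!/bin/bash
cd build/web/
# --bind 0.0.0.0 makes the listening interface explicit; a server bound only to
# 127.0.0.1 inside the container would be unreachable through -p port mapping
python3 -m http.server 8080 --bind 0.0.0.0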

Docker container not starting from a created Dockerfile

I am completely new to Docker. I am trying to create a container for Tomcat from the Ubuntu base image and have written a Dockerfile accordingly:
FROM ubuntu
RUN apt-get update -y && apt-get upgrade -y
RUN apt-get install wget -y
RUN apt-get install openjdk-8-jdk -y
RUN mkdir /usr/local/tomcat
RUN wget https://mirrors.estointernet.in/apache/tomcat/tomcat-8/v8.5.61/bin/apache-tomcat-8.5.61.tar.gz
RUN tar xvzf apache-tomcat-8.5.61.tar.gz
RUN mv apache-tomcat-8.5.61 /usr/local/tomcat/
#CMD ./usr/local/tomcat/apache-tomcat-8.5.61/bin/catlina.sh run
EXPOSE 8080
RUN /usr/local/tomcat/apache-tomcat-8.5.61/bin/catlina.sh run
Created a Docker image from the Dockerfile using:
docker build -t [filename] .
Tried to start the container using: docker run -itd --name my-con -p 8080:8080 [filename]
but the container is not starting; it is listed among the stopped containers.
Can anyone help me fix this issue?
Thanks.
Try this as the last line instead:
CMD ["/usr/local/tomcat/apache-tomcat-8.5.61/bin/catalina.sh","run"]
Two things change: the script name is catalina.sh (the Dockerfile misspells it as catlina.sh), and it must be a CMD rather than a RUN, because RUN executes at image build time while CMD is what the container executes when it starts.

How to exchange files between docker container and local filesystem?

I have TypeScript code that reads the contents of a directory and has to delete them one by one at certain intervals.
Everything works fine locally. I made a Docker container for my code and wanted to achieve the same thing there; however, I realized that the directory contents are the ones that existed at the time the image was built.
As far as I understand, the link between the Docker container and the local file system is missing.
I have been looking into the bind and volume options, and I came across the following simple tutorial:
How To Share Data Between the Docker Container and the Host
According to the previous tutorial, theoretically, I would be able to achieve my goal:
If you make any changes to the ~/nginxlogs folder, you’ll be able to see them from inside the Docker container in real-time as well.
However, I followed exactly the same steps and still couldn't see changes made locally reflected in the Docker container, or vice versa.
My question is: how can I access my local file system from a Docker container to read/write/delete files?
Update
This is my Dockerfile:
FROM ampervue/ffmpeg
RUN curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
RUN apt-get update -qq && apt-get install -y --force-yes \
    nodejs; \
    apt-get clean
RUN npm install -g fluent-ffmpeg
RUN rm -rf /usr/local/src
RUN apt-get autoremove -y; apt-get clean -y
WORKDIR /work
COPY package.json .
COPY . .
CMD ["node", "sizeCalculator.js"]
An easy way is to volume mount on the docker run command:
docker run -it -v /<source dir>/:/<destination dir> <image_name> bash
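For example, with a hypothetical host folder and image name, to share a directory at /vol1 inside the container:
docker run -it -v /home/me/videos:/vol1 myimage bash
Anything added, changed, or deleted under /home/me/videos on the host is immediately visible under /vol1 in the container, and vice versa.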
Another way is to use docker-compose.
Let's try it with docker-compose: put your Dockerfile and docker-compose file in the same directory.
The main focus is the volumes mapping:
volumes:
  - E:\dirToMap:/vol1
docker-compose.yaml
version: "3"
services:
  ampervue:
    build:
      context: ./
    image: <Image Name>
    container_name: ampervueservice
    volumes:
      - E:\dirToMap:/vol1
    ports:
      - 8080:8080
And add the volume in the Dockerfile:
FROM ampervue/ffmpeg
RUN curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
RUN apt-get update -qq && apt-get install -y --force-yes \
    nodejs; \
    apt-get clean
RUN npm install -g fluent-ffmpeg
RUN rm -rf /usr/local/src
RUN apt-get autoremove -y; apt-get clean -y
WORKDIR /work
VOLUME /vol1
COPY package.json .
COPY . .
CMD ["node", "sizeCalculator.js"]
and run the following command to bring up the container:
docker-compose -f "docker-compose.yaml" up -d --build
The examples below come directly from the docs:
The --mount and -v examples below produce the same result. You can't run them both unless you remove the devtest container after running the first one.
with -v:
docker run -d -it --name devtest -v "$(pwd)"/target:/app nginx:latest
with --mount:
docker run -d -it --name devtest --mount type=bind,source="$(pwd)"/target,target=/app nginx:latest
This is where you have to type your two different paths:
-v /path/from/your/host:/path/inside/the/container
<-------host------->:<--------container------->
--mount type=bind,source=/path/from/your/host,target=/path/inside/the/container
<-------host-------> <--------container------->
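To see the two-way sync in action, a quick check against the devtest container above (assuming the $(pwd)/target directory from the examples):
touch target/hello.txt
docker exec devtest ls /app    # hello.txt appears inside the container
docker exec devtest rm /app/hello.txt
ls target                      # and the delete is visible back on the host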

Docker container exited instantly with code (127)

In the log file I have this error:
./worker: error while loading shared libraries: libcares.so.2: cannot open shared object file: No such file or directory
I tried everything with the library; it exists and is linked in the path.
My Dockerfile :
FROM ubuntu:20.04
RUN apt update -y && apt install libssl-dev -y
WORKDIR /worker
COPY build/worker ./
COPY build/lib /usr/lib
EXPOSE 50051
CMD ./worker
My makefile:
all: clean build
build:
	mkdir -p build/lib && \
	cd build && cmake .. && make
clean:
	rm -rf build
clean-containers:
	docker container stop `docker container ls -aq`
	docker container rm `docker container ls -aq`
create-workers:
	docker run --name worker1 -p 2001:50051 -d workerimage
	docker run --name worker2 -p 2002:50051 -d workerimage
	docker run --name worker3 -p 2003:50051 -d workerimage
	docker run --name worker4 -p 2004:50051 -d workerimage
	docker run --name worker5 -p 2005:50051 -d workerimage
	docker run --name worker6 -p 2006:50051 -d workerimage
	docker run --name worker7 -p 2007:50051 -d workerimage
	docker run --name worker8 -p 2008:50051 -d workerimage
	docker run --name worker9 -p 2009:50051 -d workerimage
	docker run --name worker10 -p 2010:50051 -d workerimage
Make sure libcares.so.2 and the other shared libraries are actually present inside /usr/lib of the container; exit code 127 together with the "error while loading shared libraries" message means the dynamic loader could not find the library at run time.
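A quick way to confirm, using the workerimage tag from the makefile (ldd lists every shared library the binary needs and marks any that fail to resolve):
docker run --rm workerimage ls /usr/lib | grep libcares
docker run --rm workerimage ldd ./worker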

Docker images built locally fail while the same image from docker hub works

I am running Windows 10, using Docker for Windows.
Here's the baseline:
docker pull nshou/elasticsearch-kibana:kibana3
docker image list
docker run -d -p 9200:9200 -p 5601:5601 {imageName}:kibana3
curl localhost:9200/_stats
Good response.
So I copied the Dockerfile from https://bitbucket.org/nshou/elasticsearch-kibana/src/kibana3/Dockerfile
FROM ubuntu:latest
RUN apt-get update -q
RUN apt-get install -yq wget default-jre-headless mini-httpd
ENV ES_VERSION 1.7.4
RUN cd /tmp && \
    wget -nv https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-${ES_VERSION}.tar.gz && \
    tar zxf elasticsearch-${ES_VERSION}.tar.gz && \
    rm -f elasticsearch-${ES_VERSION}.tar.gz && \
    mv /tmp/elasticsearch-${ES_VERSION} /elasticsearch
ENV KIBANA_VERSION 3.1.3
RUN cd /tmp && \
    wget -nv https://download.elastic.co/kibana/kibana/kibana-${KIBANA_VERSION}.tar.gz && \
    tar zxf kibana-${KIBANA_VERSION}.tar.gz && \
    rm -f kibana-${KIBANA_VERSION}.tar.gz && \
    mv /tmp/kibana-${KIBANA_VERSION} /kibana
CMD /elasticsearch/bin/elasticsearch -Des.http.cors.enabled=true -Des.logger.level=OFF & mini_httpd -d /kibana -h `hostname` -r -D -p 5601
EXPOSE 9200 5601
and I build it with
docker build -t test/test .
Image builds successfully.
docker image list
docker run -d -p 9200:9200 -p 5601:5601 {imageName}:latest
curl localhost:9200/_stats
No response. Not a 404; the server simply gives no reply at all.
The problem seems to be that when I build the image myself it doesn't work, yet when I pull the image built from the same Dockerfile from the hub, it works.
Why, and how do I fix it?
Figured it out.
When the locally built container runs, it's actually crashing with this error:
Unrecognized VM option 'UseParNewGC', Error: Could not create the Java Virtual Machine
default-jre-headless now pulls in a version of Java that is incompatible with this older version of Elasticsearch.
Switching to openjdk-8-jre-headless solves the issue.
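The fix is a one-line change to the install step in the Dockerfile above, assuming openjdk-8-jre-headless is available in the base image's package repositories:
RUN apt-get install -yq wget openjdk-8-jre-headless mini-httpd
This pins Java 8, which still accepts the UseParNewGC option that this Elasticsearch version passes to the JVM.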
I guess the image on nshou is cached and old enough that it was built back when default-jre-headless still resolved to Java 8, whereas building today pulls a newer JDK that no longer supports the UseParNewGC option; that would explain why the pulled kibana3 image works while a fresh build of the same Dockerfile does not.
Thankfully my problem is resolved.
