Changing Grafana port - influxdb

I currently have InfluxDB feeding dashboards in Grafana. I will eventually be deploying this stack on a server.
However, the default port for Grafana is 80. I must change this port, but I don't know how. Can anyone help out?
Thanks.

You have to change the port not only in /etc/grafana/grafana.ini but also in the
/usr/share/grafana/conf/defaults.ini and /usr/share/grafana/conf/sample.ini files. Search for port 3000 (the default port for Grafana) in these three files and replace it with your preferred port.
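If you prefer to script the edit, the substitution can be done with sed. This is a minimal sketch on a scratch copy, not the real file (the real file would be /etc/grafana/grafana.ini and would need sudo; 8080 is just an example port):

```shell
# Create a scratch copy to demonstrate the substitution (illustrative only)
printf ';http_port = 3000\n' > /tmp/grafana_demo.ini

# Uncomment the line (if commented) and swap in the preferred port
sed -i 's/^;*http_port = 3000/http_port = 8080/' /tmp/grafana_demo.ini

cat /tmp/grafana_demo.ini
```

On the real files you would run the same sed with sudo and then restart grafana-server.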

Here's the easiest way I found.
docker run -d \
-p 2345:2345 \
--name grafana \
-e "GF_SERVER_HTTP_PORT=2345" \
grafana/grafana
See the documentation here.
https://grafana.com/docs/grafana/latest/installation/docker/#configuration
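The same environment-variable approach can also be written as a compose file. A sketch assuming Docker Compose syntax, with port 2345 carried over from the command above:

```yaml
services:
  grafana:
    image: grafana/grafana
    container_name: grafana
    environment:
      - GF_SERVER_HTTP_PORT=2345
    ports:
      - "2345:2345"
```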

Since Grafana 2.0:
Grafana now ships with its own backend server
You can edit /etc/grafana/grafana.ini (usual location) and change the running port:
[server]
http_port=1234
Source:
http://docs.grafana.org/installation/configuration/

If you are using Linux, you can change the default port by editing /etc/grafana/grafana.ini. There is no separate custom.ini on Linux. For Windows, macOS, or any other platform, check the official documentation.
Opening grafana.ini for editing requires sudo privileges. To change the port, follow the steps below.
Execute sudo gedit /etc/grafana/grafana.ini in a new terminal window.
Search for 3000 in the .ini file and you will find a line similar to the one shown below.
# The http port to use
;http_port = 3000
Remove the semicolon (;) and change the port to the one you wish to run the Grafana server on.
Save the file and close gedit.
You will need to restart the Grafana server for the changes to take effect. Run sudo systemctl restart grafana-server.
The Grafana server should now be running on the port that you provided. Note that you will have to use systemctl or service depending on your init system. To determine your init system, run ps --no-headers -o comm 1.
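The init-system check above can be folded into a small sketch that picks the matching restart command (the command is only printed here, not executed):

```shell
# Detect the init system from PID 1 and print the matching restart command
init_comm="$(ps --no-headers -o comm 1)"
if [ "$init_comm" = "systemd" ]; then
    echo "sudo systemctl restart grafana-server"
else
    echo "sudo service grafana-server restart"
fi
```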

For those using Docker:
Create a grafana.ini:
[server]
http_port = 1234
Update your Dockerfile:
FROM grafana/grafana
EXPOSE 1234
ADD grafana.ini /etc/grafana
Build and run the container (note that you must run the image you just built, not the stock grafana/grafana image):
docker build -t grafana .
docker run \
-d \
-p 1234:1234 \
--name grafana \
grafana
The EXPOSE is technically optional but is good practice for documentation.

For Linux, I grab the setup file from here:
https://grafana.com/grafana/download?platform=linux
Then install it.
You only need to change one file, /usr/share/grafana/conf/defaults.ini:
Replace:
http_port = 3000
With
http_port = YourPortYouWant
Then restart your app:
sudo service grafana-server stop
sudo service grafana-server start
To verify you should run:
sudo service grafana-server status
Then you can see the app running on your desired port:
Open localhost:yourport to see the result.
I think Grafana's documentation should be updated.

On Windows:
Change the port from 3000 to 3001 in "C:\Program Files\GrafanaLabs\grafana\conf\defaults.ini"
Restart the Grafana service from Windows Services

I know it's an old thread, but on macOS I had to make changes in two places.
I installed through Brew:
/usr/local/etc/grafana/grafana.ini
/usr/local/Cellar/grafana/8.1.5/share/grafana/conf/defaults.ini

Before Grafana 2.0, Grafana just ran behind a standard web server, like Apache. If you are using Apache, just update your virtual hosts file to use whatever port you want, and restart Apache. Grafana will then be on the new port.

For Windows 10 and Grafana v7.1.1, the following steps made Grafana serve on a different port:
Navigate to the Grafana "conf" folder location like "C:\Program Files\GrafanaLabs\grafana\conf"
Copy the file "sample.ini" in the same location
Rename the copied sample.ini to "custom.ini"
Edit "custom.ini" by opening it in any editor. The editor must be running as Administrator.
Uncomment the ";http_port = 3000" line by removing the semicolon (;). Note: a semicolon (;) is used to comment out lines in .ini files.
Change the port 3000 to whatever port is required; make sure you are allowed to bind to the new port. I changed to port "3001".
Save the file.
Restart the Windows machine.
Grafana is now served at "http://localhost:3001/?orgId=1"

You have to remove the semicolon (;), like this:
http_port = 3900

Related

Docker - Can't access docker port 8080 even if it is exposed. Works only with --network host

I'm trying to run a Visual Studio Code server from a Dockerfile. If you want to reproduce it, clone https://github.com/alessandriLuca/4Stackoverflow .
script.sh will build the Docker container and run it, sharing the port. The problem is that apparently I can't reach port 8080 even though I exposed it. I solved it on Ubuntu with --network host, but this option is not available on macOS or Windows.
Here is the last part of the Dockerfile, which is related to the visualStudio installation:
COPY visualStudio /visualStudio
RUN cd /visualStudio/ && 7za -y x "*.7z*"
RUN dpkg -i /visualStudio/visualStudio/*.deb
COPY config.yaml ~/.config/code-server/config.yaml
EXPOSE 8080
CMD ["code-server","--auth","none"]
As you can see, I use a config.yaml, but it is not working either, since when I run code-server that file is overwritten, so the port still remains 8080.
Thank you for any help.
EDIT
You can find all files, included config.yaml here https://github.com/alessandriLuca/4Stackoverflow/tree/main/merged2_visualStudio
EDIT
I kind of solved it! As you said, it was binding to 127.0.0.1 instead of 0.0.0.0, so I changed it manually in config.yaml and now it is working. The only problem now is adding this configuration directly in the Dockerfile, since when I run the server it overwrites the config.yaml that I created. Does anyone have any ideas about this part?
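One workaround for the overwritten config file, assuming the code-server version in use supports the --bind-addr flag (check code-server --help first), is to pass the bind address on the command line so config.yaml no longer matters:

```dockerfile
EXPOSE 8080
# Bind to all interfaces via a flag instead of relying on config.yaml
CMD ["code-server", "--auth", "none", "--bind-addr", "0.0.0.0:8080"]
```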

Docker Port Not Exposing Golang

I am building a Golang web service in Docker. The build seems fine, but I am unable to expose the port for external (outside the container) access. When I curl from the command line inside the container, the app appears to work fine.
I saw quite a few posts of similar problems but unfortunately many were not resolved or didn't seem applicable.
FROM golang:alpine
RUN mkdir /go/src/webservice_refArch
ADD . /go/src/webservice_refArch
WORKDIR /go/src/webservice_refArch
RUN apk add curl
RUN cd /go/src/webservice_refArch/ && go get ./...
RUN cd /go/src/webservice_refArch/cmd/reference-w-s-server && go build -o ../../server
EXPOSE 7878
ENTRYPOINT ["./server", "--port=7878"]
I have tried both:
:7878
localhost:7878
I was facing the same issue. I changed the listen host from localhost to 0.0.0.0 and it worked.
To debug this, I tried curl inside the container and it worked fine, but outside the container the response of the curl was blank. The port was mapped but the content was not served outside the container. Once you change "localhost" to 0.0.0.0, it will work.
See https://docs.docker.com/engine/reference/run/#expose-incoming-ports; just exposing the port in the Dockerfile is not enough.
You can add -p 7878:7878 when starting the container, or use -P to let Docker set up an automatic host port mapping for you.
If you do not want to do the above, you can also add --net=host when starting the container; the container will then use the host's network, which also works.
If you are trying to access the port inside your Docker container from your local machine, you need to map it to the desired port on your local machine:
docker run -p 7878:7878 IMAGE
Then you should be able to access it on your host.

.Net Core WebApi refuses connection in Docker container

I am trying out Docker with a small WebApi which I have written in dotnet core.
The API seems to work fine: when I run it with dotnet run it starts normally and is reachable on port 5000. But when I run it in a Docker container, it starts, yet I cannot reach it on the exposed/mapped port. I'm running Docker on Windows 10 within VirtualBox.
My Dockerfile looks like this:
FROM microsoft/aspnetcore-build:latest
COPY . /app
WORKDIR /app
RUN dotnet restore
EXPOSE 5000
ENV ASPNETCORE_URLS http://*:5000
ENTRYPOINT ["dotnet", "run"]
I am building the container like this:
docker build -t api-test:v0 .
And run it with this command:
docker run -p 5000:5000 api-test:v0
The output of the run command is:
Hosting environment: Production
Content root path: /app
Now listening on: http://localhost:5000
I have also tried different approaches of binding the URL:
as http://+:5000, http://0.0.0.0:5000, http://localhost:5000, ...
via CLI parameters --urls / --server.urls
but without success. Does anyone see what I'm doing wrong or missing?
Now listening on: http://localhost:5000
Binding to localhost will not work for your scenario. You need the app to bind to 0.0.0.0 for the Docker port forwarding to work. Once you do that, you should be able to reach the app on the VM IP, port 5000.
Make sure your service is listening on all interfaces using http://*:5000 or similar (if it prints localhost when running, it won't work).
If you set up your Docker environment with VirtualBox and used e.g. docker-machine, you need to use the IP address of the virtual machine that runs the Docker containers. You can get the IP via docker-machine ip default.
I found a way around this. All you need to do is edit launchSettings.json and change your app setting to "applicationUrl": "http://*:5000/". Build the image, then run it with docker run -d -p 81:5000 aspnetcoreapp. After it runs, get the IP address of the container with docker exec container_id ipconfig, then browse to http://container_ip:5000/api/values. For some reason http://localhost:81 does not work; I still need to figure out why.
I had the same issue recently with Docker version 20.x. The answer above provided good pointers. If anyone faces the same issue, here is how I solved it: edit launchSettings.json to
"applicationUrl": "http://localhost:5001;http://host.docker.internal:5001",
This will let you test your web API locally and also to be consumed from the container.

Deploying new versions of an image instantly

I would like to have 3 versions of my container running at any one time (on the same machine). Something like this:
version v7 (stage)
version v6 (live)
version v5 (old)
then I would like to map this to 3 urls:
v7.example.com
v6.example.com
v5.example.com
And also, a 4th url, which refers to the current (or default) version:
www.example.com (which maps to http://v6.example.com)
Presumably, I could take some configuration step that would change the "default" version from v6 to v7. That step should hopefully be instant and atomic.
The idea is that deploying the next version of an app is a distinct step from activating that version (by activate, I mean making that version the default).
Therefore a rollout (or a rollback) would simply be a matter of changing the default version to the next (or previous) version.
Google App Engine supports this kind of pattern and I really like it.
Has anyone set something like this up using Docker? I would appreciate any advice on how to do it. Thanks.
I would do this with a reverse proxy in front of the containers running your webapp.
Example using the jwilder/nginx-proxy image
Let's say your docker host IP address is 11.22.33.44.
Let's say your docker images are:
mywebapp:5 for v5
mywebapp:6 for v6
mywebapp:7 for v7
First, make sure your DNS is set up so that v5.example.com, v6.example.com, v7.example.com and www.example.com all resolve to 11.22.33.44.
Start a jwilder/nginx-proxy on your docker host:
docker run -d --name reverseproxy -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro -e DEFAULT_HOST=www.example.com jwilder/nginx-proxy
Set v6 as the default one
Start the webapps containers:
docker run -d --name webapp5 -e VIRTUAL_HOST="v5.example.com" mywebapp:5
docker run -d --name webapp6 -e VIRTUAL_HOST="v6.example.com,www.example.com" mywebapp:6
docker run -d --name webapp7 -e VIRTUAL_HOST="v7.example.com" mywebapp:7
The jwilder/nginx-proxy will use the value of the VIRTUAL_HOST environment variable to update its configuration and route the requests to the correct container.
How to make v7 the new default one
First, remove container webapp7 and create a new one with www.example.com added to the VIRTUAL_HOST variable:
docker rm webapp7
docker run -d --name webapp7 -e VIRTUAL_HOST="v7.example.com,www.example.com" mywebapp:7
In this state, the reverse proxy will load balance queries for www.example.com to both webapp6 and webapp7 containers.
Finally, remove container webapp6 and eventually recreate it, but without www.example.com in the VIRTUAL_HOST value:
docker rm webapp6
docker run -d --name webapp6 -e VIRTUAL_HOST="v6.example.com" mywebapp:6
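The same containers could also be declared in a compose file. A sketch using the image names from this answer (compose syntax assumed; webapp5 omitted for brevity):

```yaml
services:
  reverseproxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    environment:
      - DEFAULT_HOST=www.example.com
  webapp6:
    image: mywebapp:6
    environment:
      - VIRTUAL_HOST=v6.example.com,www.example.com
  webapp7:
    image: mywebapp:7
    environment:
      - VIRTUAL_HOST=v7.example.com
```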
I thought I would share what I ended up doing. I took Thomasleveil's advice to use nginx. But rather than starting and stopping a whole Docker container and nginx just to switch versions, I do this:
Change the port number in the nginx config file (see below).
Call service nginx reload (which is instant).
server {
    location / {
        proxy_pass http://192.168.1.50:81/;
    }
}
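The port switch itself can be scripted. A minimal sketch on a scratch copy of the config (the real file would live under /etc/nginx/, and the reload step is only printed here):

```shell
# Write a scratch copy of the server block (illustrative path)
cat > /tmp/nginx_demo.conf <<'EOF'
server {
    location / {
        proxy_pass http://192.168.1.50:81/;
    }
}
EOF

# Point www at the next version by swapping the upstream port
sed -i 's|proxy_pass http://192.168.1.50:81/|proxy_pass http://192.168.1.50:82/|' /tmp/nginx_demo.conf

grep proxy_pass /tmp/nginx_demo.conf
echo "sudo service nginx reload"
```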

Installing PostgreSQL within a docker container

I've been following several different tutorials as well as the official one however whenever I try to install PostgreSQL within a container I get the following message afterwards
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
I've looked through several questions here on SO and throughout the internet but no luck.
The problem is that your application/project is trying to access the Postgres socket file on the HOST machine (not in the Docker container).
To solve it, either explicitly ask for a TCP/IP connection while using the -p flag to set up a port for the Postgres container, or share the UNIX socket with the HOST machine using the -v flag.
Note:
Using the -v or --volume= flag means you are sharing space between the HOST machine and the Docker container. If you have Postgres installed and running on your host machine, you will probably run into issues.
Below I demonstrate how to run a Postgres container that is accessible both over TCP/IP and through the UNIX socket. I also name the container postgres.
docker run -p 5432:5432 -v /var/run/postgresql:/var/run/postgresql -d --name postgres postgres
There are other solutions, but I find this one the most suitable. Finally if the application/project that needs access is also a container, it is better to just link them.
By default psql tries to connect to the server using a UNIX socket. That's why we see /var/run/postgresql/.s.PGSQL.5432, the location of the UNIX-socket descriptor.
If you run postgresql-server in Docker with port binding, you have to tell psql to use a TCP socket. Just add the host param (--host or -h):
psql -h localhost [any other params]
UPD: Or share the UNIX socket descriptor with the host (where psql will be started) as shown in the main answer. But I prefer the TCP socket as the more easily managed approach.
FROM postgres:9.6
RUN apt-get update && apt-get install -q -y postgresql-9.6 postgresql-client-9.6 postgresql-contrib-9.6 postgresql-client-common postgresql-common
RUN echo postgres:postgres | chpasswd
RUN pg_createcluster 9.6 main --start
# A server started in one RUN layer is no longer running in the next layer,
# so start it and run psql in the same RUN instruction
RUN /etc/init.d/postgresql start && su -c "psql -c \"ALTER USER postgres PASSWORD 'postgres';\"" postgres
Here are instructions for fixing that error that should also work for your docker container: PostgreSQL error 'Could not connect to server: No such file or directory'
If that doesn't work for any reason, there are many off-the-shelf PostgreSQL Docker containers you can look at for reference on the Docker Index: https://index.docker.io/search?q=postgresql
Many of the containers are built from trusted repos on github. So if you find one that seems like it meets your needs, you can review the source.
The Flynn project has also included a postgresql appliance that might be worth checking out: https://github.com/flynn/flynn-postgres
Run the command below to create a new container with PostgreSQL running in it, which can be accessed from other containers/applications.
docker run --name postgresql-container -p 5432:5432 -e POSTGRES_PASSWORD=somePassword -d postgres
Now, export the connection string or DB credentials from your .env and use them in the application.
Reference: detailed installation and running
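A sketch of that step with illustrative values (the file path, user, and password here are not from the answer):

```shell
# Write the connection string to a scratch .env file (values illustrative)
cat > /tmp/demo.env <<'EOF'
DATABASE_URL=postgres://postgres:somePassword@localhost:5432/postgres
EOF

# Load it into the environment, as an app or entrypoint script might
set -a
. /tmp/demo.env
set +a
echo "$DATABASE_URL"
```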
