I have the installation done on my Mac. What local port does Caravel run on?
It has no default port; instead, you need to provide the port number when you start the Superset service.
superset runserver -p Port_Number
And it should get going.
Default port is 8088.
# Start the web server on port 8088, use -p to bind to another port
superset runserver
This is directly copied from the Superset installation guide.
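For example, to start the web server on another port (8089 here is just an arbitrary free port on your machine):
# Start the web server on port 8089 instead of the default 8088
superset runserver -p 8089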
I am currently facing the following problem:
I built a Docker container of a Node server (a simple Express server that sends tracing data to Zipkin on port 9411) and want to run it alongside Zipkin.
So, as I understand it, the Node server should send tracing data to Zipkin using port 9411.
If I run the server with Node only (not in Docker), I can run it alongside Zipkin and everything works fine.
But if Zipkin is already running and I then want to fire up my Docker container, I get the error
Error starting userland proxy: listen tcp4 0.0.0.0:9411: bind: address already in use.
My understanding is that there is a conflict over port 9411: it is already bound by Zipkin, but the server in the Docker container apparently also needs it to communicate with Zipkin.
I would appreciate it if anybody has an idea how I could solve this problem.
Greetings,
Robert
When you start a docker container, you add a port binding like this:
docker run ... -p 8000:9000
where 8000 is the port you use on the host to reach port 9000 inside the container.
Don't bind the express server to 9411, as Zipkin is already using that port.
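For example (the image name my-express-app and port 3000 are placeholders for your setup):
# Publish the express app on a free host port; Zipkin keeps 9411 on the host
docker run -d --name express-app -p 3000:3000 my-express-app
# On Docker Desktop (Mac/Windows) the container can reach the host's Zipkin
# via host.docker.internal:9411 instead of localhost:9411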
I found the solution: using the flag --network="host" does the job; the -p flag is not needed either.
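A minimal sketch of that solution (again, the image name is a placeholder):
# Host networking: the container shares the host's network stack,
# so it can reach Zipkin on localhost:9411 and no -p mapping is needed
docker run --network="host" my-express-app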
I have the feeling that I am overlooking something obvious as my solutions/ideas so far seem too cumbersome. I have searched intensively for a good solution, but so far without success - probably because I do not know what to look for.
Question:
How do you interact with the graphical interfaces of web servers running in different containers (within the same Docker Network) on a remote server, given URL redirections between these containers?
Initial situation:
I have two containers (a Flask web application and a Tomcat server with OpenAM running on it) running on my docker host (Azure-VM).
On the VM I can reach the content of both containers via the ports that I have opened.
Using ssh port forwarding I can interact with the graphical components of both containers on my local machine.
Both containers were created with the same docker-compose and can be accessed via their domain name without additional network settings.
So far I have configured OpenAM on my local machine using ssh port forwarding.
Problem:
The Flask web app references OpenAM by the domain name defined in docker-compose, and vice versa. I forward the port of the Flask container to my local machine. The Flask application is running and I can interact with it in my browser.
The system fails as soon as I am redirected from Flask to OpenAM on my local machine, because the reference Flask uses for the OpenAM container is specific to the Docker network. The port of the OpenAM container is also different.
In other words, the routing between the two networks is nonexistent.
Solution ideas:
Execute the requests on the VM using command-line tools.
Use a container with a headless browser that automatically executes the requests.
Use Network Setting 'Host' and execute the headless browser on the VM instead.
Route all requests through a single container (similar to a VPN) and use ssh port forwarding.
Simplified docker-compose:
version: "3.4"
services:
openam:
image: openidentityplatform/openam
ports:
- 5001:8080
command: /usr/local/tomcat/bin/catalina.sh run
flask:
build: ./SimpleHTTPServer
ports:
- 5002:8000
command: python -m http.server 8000
Route all requests through a single container - this is the correct approach.
See the API gateway pattern.
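A rough sketch of that pattern with an nginx container as the single entry point; the network name myproject_default and the routing paths are assumptions, so adjust them to your compose project:
# Minimal nginx config routing to both services by their compose service names
cat > gateway.conf <<'EOF'
server {
    listen 80;
    location /openam/ { proxy_pass http://openam:8080/openam/; }
    location /        { proxy_pass http://flask:8000/; }
}
EOF
# Attach the gateway to the compose network and publish a single port;
# only this one port then needs to be forwarded via ssh
docker run -d --name gateway --network myproject_default -p 8080:80 \
  -v "$(pwd)/gateway.conf:/etc/nginx/conf.d/default.conf:ro" nginx:alpine
A single ssh -L 8080:localhost:8080 forward then lets your local browser reach both GUIs through one address, though redirects inside OpenAM may still need its base URL adjusted.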
The best solution that I could find so far. It is not suitable for production, but for prototyping, or if you simply want to emulate a server structure using containers, it is an easy setup.
General Idea:
Deploy a third VNC container running a web browser and forward the port of this third container to your local machine. As the third container is part of the Docker network, it can naturally resolve the internal domain names, and the VNC installation on your local machine enables you to interact with the GUIs.
Approach
Add the VNC container to the docker-compose file of the original question.
Enable X11 forwarding on the server and client-side.
Forward the port of the VNC container using ssh.
Install VNC on the client, start a new session, and enter the predefined password.
Try it out.
Step by Step
Add the VNC container (inspired by creack's post on stackoverflow) to the docker-compose file from the original question:
version: "3.4"
services:
openam:
image: openidentityplatform/openam
ports:
- 5001:8080
command: /usr/local/tomcat/bin/catalina.sh run
flask:
build: ./SimpleHTTPServer
ports:
- 5002:8000
command: python -m http.server 8000
firefoxVnc:
container_name: firefoxVnc
image: creack/firefox-vnc
ports:
- 5900:5900
environment:
- HOME=/
command: x11vnc -forever -usepw -create
Run the docker-compose: docker-compose up
Enable X11 forwarding on the server and client-side.
On the client side, run $ vim ~/.ssh/config and add the following lines:
Host *
    ForwardAgent yes
    ForwardX11 yes
On the server side, run $ vim /etc/ssh/sshd_config and make sure the following lines are set:
X11Forwarding yes
X11DisplayOffset 10
Forward the port of the VNC container using ssh
ssh -v -X -L 5900:localhost:5900 gw.example.com
Make sure to include the -X flag for X11. The -v flag is just for debugging.
Install VNC on the client, start a new session and enter the predefined password.
Install VNC viewer on your local machine
Open the installed viewer and start a new session using the forwarded address localhost:5900
When prompted, type in the password 1234, which was set in the original Dockerfile of the VNC Docker image (see creack's post linked above).
You can now go to either openam:8080/openam/ or flask:8000 within the browser of the VNC session on localhost:5900.
An even better solution: it is clean, straightforward, and also works perfectly when parts of the application run on different virtual machines.
Set Up and Use an SSH SOCKS Tunnel
For Google Chrome and macOS:
Set the network mode of your containers to host (e.g. network_mode: host in your docker-compose file).
Start an SSH tunnel:
$ ssh -N -D 9090 [USER]@[SERVER_IP]
Add the SwitchyOmega proxy addon to your Chrome browser.
Configure SwitchyOmega by going to New Profile > Proxy Profile, clicking create, choosing the SOCKS5 protocol, and entering localhost as the server and 9090 as the port (the local end of the tunnel opened by the ssh command above).
Open a new terminal tab and run:
"/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" \
--user-data-dir="$HOME/proxy-profile" \
--proxy-server="socks5://localhost:9090"
A new Chrome session will open in which you can simply browse your Docker applications.
Reference | When running Linux or Windows | Using Firefox (no addon needed)
The guide How to Set up SSH SOCKS Tunnel for Private Browsing explains how to set up an SSH SOCKS tunnel on macOS, Windows, or Linux, using Google Chrome or Firefox. I have simply reproduced the macOS and Chrome setup here in case the link should die.
I am currently working on a project where I am attempting to use MinIO with a data-moving program developed by my company. This broker software only allows devices using port 80 to successfully complete a job; however, any avid user of MinIO knows that MinIO serves on port 9000. So my question is: is there a way to change the port on which the MinIO server is hosted? I've tried looking through the config.json file to find an address variable to assign a port number to, but each of the address variables I attempted to change had no effect on the endpoint port number. For reference, I am hosting MinIO on a Windows 10 virtual machine during the test phase of the project and will be moving it onto a dedicated server (also Windows 10) upon successful completion of testing.
Add --address :80 when you start your minio.
You can refer to this: https://docs.min.io/docs/multi-tenant-minio-deployment-guide.html
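For example (the data directory paths below are placeholders):
# Linux/macOS: serve MinIO on port 80 instead of the default 9000
minio server --address :80 /path/to/data
# Windows (the asker's case)
minio.exe server --address :80 D:\minio-data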
When you start the MinIO server, use the following command…
minio server --address :[port you want to use] [path to data folder]
for example…
minio server --address :8000 /data
Here is a little backstory. I implemented a couple of web APIs using a microservices architecture, and I am trying to make the microservices accessible via HTTPS. The microservices are developed with .NET Core, so according to the Microsoft documentation, to enforce HTTPS I need to configure Kestrel. The following is how I did it.
.UseKestrel(options =>
{
    options.Listen(IPAddress.Loopback, 5000);
    options.Listen(IPAddress.Loopback, 5001, listenOptions =>
    {
        listenOptions.UseHttps("cert.pfx", "pwd");
    });
})
To keep it simple, I use Kestrel by itself and skip the reverse proxy. I will certainly add Nginx as a reverse proxy later, but that is future work. I tested locally and it worked. Then I deployed it onto Docker. Here is the docker-compose.override file:
version: '3.4'
services:
  dataservice:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=https://+:443;http://+:80
    ports:
      - "5000:80"
      - "5001:443"
In the Dockerfile, ports 5000 and 5001 are exposed. I built the project into images and ran it on Docker using docker run -it --rm -p 5000:80 --name *name* *imagename*. Docker shows Now listening on: http://127.0.0.1:5000 and Now listening on: https://127.0.0.1:5001. Now the problem is, leaving the HTTPS part aside, the APIs cannot even be accessed over HTTP. The browser just shows This page isn’t working 127.0.0.1 didn’t send any data. ERR_EMPTY_RESPONSE. I found a similar question, Docker: cannot open port from container to host, which suggests the server should listen on 0.0.0.0. Though I do not fully understand the reason, I changed the Kestrel configuration to
options.Listen(IPAddress.Any, 5000);
built and ran the Docker image again, and Docker shows Now listening on: http://0.0.0.0:5000, but it still doesn't work. I also tried replacing the IP with localhost, with no luck. I did not use .UseHttpsRedirection(), so HTTPS should have nothing to do with the problem.
Am I missing any configuration or doing anything wrong? It would be really helpful if anyone could shed some light. Thank you in advance.
You should listen on 80 and 443 inside the container, i.e. options.Listen(IPAddress.Any, 80); because this docker declaration
ports:
- "5000:80"
means that port 80 inside the container (the port from your source code) is published to the host as port 5000, and not the other way around.
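With the container listening on 80 and 443, a docker run matching the compose mapping above would look roughly like this (the *name* and *imagename* placeholders are the ones from the question):
# Host ports 5000/5001 map to container ports 80/443
docker run -it --rm -p 5000:80 -p 5001:443 --name *name* *imagename*
# then browse to http://localhost:5000 and https://localhost:5001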
I have exposed port 80 in my application container's dockerfile.yml, as well as mapping "80:80" in my docker-compose.yml, but I only get "Connection refused" after I do a docker-compose up and try an HTTP GET on port 80 of my docker-machine's IP address. My Docker Hub provided RethinkDB instance's admin panel gets mapped just fine through that same dockerfile.yml ("EXPOSE 8080") and docker-compose.yml (ports "8080:8080"), and when I start the application on my local development machine, port 80 gets exposed as expected.
What could be going wrong here? I would be very grateful for a quick insight from anyone with more docker experience!
So in my case, my service containers both bound to localhost (127.0.0.1), and therefore the exposed ports were seemingly never picked up by my docker-compose port mapping. I configured my services to bind to 0.0.0.0 instead and now they work flawlessly. Thank you @creack for pointing me in the right direction.
In my case I was using
docker-compose run app
Apparently, the docker-compose run command does not create any of the ports specified in the service configuration.
See https://docs.docker.com/compose/reference/run/
I started using
docker-compose create app
docker-compose start app
and problem solved.
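Alternatively, the same docs page describes a flag that makes run map the configured ports (app is the service name used above):
# --service-ports maps the ports from the service configuration, unlike a plain run
docker-compose run --service-ports app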
In my case I found that the service I was trying to set up had all of its networks set to internal: true. It is strange that this didn't cause an issue when doing a docker stack deploy.
I have opened up https://github.com/docker/compose/issues/6534 to ask for a proper error message so it will be obvious for other people.
If you are using the same Dockerfile, make sure you also expose port 80 (EXPOSE 80); otherwise, your compose mapping 80:80 will not work.
Also make sure that your HTTP server listens on 0.0.0.0:80 and not on localhost or a different port.
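A quick way to verify both points from the host (the machine name default is a placeholder for your docker-machine name):
# Check which host ports compose actually published
docker-compose ps
# Check that something answers on the docker-machine IP
curl -v http://$(docker-machine ip default):80/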