Firebird Classic server on Ubuntu 13.10: listening for remote connections - firebird2.5

I've installed Firebird Classic on Ubuntu (13.10) and I need to open it to remote connections. netstat -an on port 3050 shows this:
tcp 0 0 127.0.0.1:3050 0.0.0.0:* LISTEN
I tried editing /etc/xinetd.d/firebird25 to listen on all addresses on eth0, and I tried editing firebird.conf to bind to all interfaces, but I still can't connect to that port remotely. The firewall is disabled.

I answered this in
https://askubuntu.com/questions/373090/ubuntu-server-13-10-and-firebird-2-5
From a fresh install:
sudo su
apt-get install xinetd
apt-get install python-software-properties
add-apt-repository ppa:mapopa
apt-get update
apt-get install firebird2.5-classic
netstat -an | grep 3050 #shows the problem: not binding to 0.0.0.0
nano /etc/firebird/2.5/firebird.conf
#comment out all RemoteBindAddress = XXXX
nano /etc/xinetd.d/firebird25
#set bind = 0.0.0.0
/etc/init.d/xinetd restart
dpkg-reconfigure firebird2.5-classic
netstat -an | grep 3050 #shows the fix: now binding to 0.0.0.0
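For reference, after these steps the service stanza in /etc/xinetd.d/firebird25 should end up looking roughly like this (a sketch only; the attribute list and server path in the packaged file may differ slightly):

service gds_db
{
    socket_type     = stream
    protocol        = tcp
    wait            = no
    user            = firebird
    bind            = 0.0.0.0
    server          = /usr/sbin/fb_inet_server
    disable         = no
}

If bind is omitted entirely, xinetd listens on all interfaces anyway, so removing the line instead of setting it to 0.0.0.0 also works.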

Related

Port 8983 is already being used by another process

I was able to run SOLR 7.x on my Apple M1 Mac without any issues.
We recently moved from SOLR 7.x to SOLR 8.x.
Now the command below throws this error consistently.
Command:
**docker run --name solr bitnami/solr:latest**
Error:
Port 8983 is already being used by another process
Info for you:
-----------
I build a custom Solr Docker image based on docker.io/bitnami/solr:8.11.1.
The commands below give empty output in the Mac terminal:
lsof -i tcp:8983
lsof -i :8983
Command:
telnet localhost 8983
Output:
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
telnet: Unable to connect to remote host
Dockerfile:
------------
FROM docker.io/bitnami/solr:8.11.1
.
.
.
RUN mkdir -p $SOLR_HOME/home/primary/conf
RUN mkdir -p $SOLR_HOME/home/reindex/conf
RUN mkdir -p $SOLR_HOME/home/primary/data
RUN mkdir -p $SOLR_HOME/home/reindex/data
COPY ./primary/core.properties $SOLR_HOME/home/primary/
COPY ./primary/custom.properties $SOLR_HOME/home/primary/
COPY ./conf/* $SOLR_HOME/home/primary/conf/
COPY ./reindex/core.properties $SOLR_HOME/home/reindex/
COPY ./reindex/custom.properties $SOLR_HOME/home/reindex/
COPY ./conf/* $SOLR_HOME/home/reindex/conf/
USER root
RUN apt-get update && apt-get upgrade -y && rm -rf /var/lib/apt/lists/*
.
.
.
EXPOSE 8983
This error with the Apple M1 chip, Docker, and SOLR 8.11.1 combination is very strange. Any help is greatly appreciated.
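One more diagnostic worth noting: since Docker Desktop on macOS runs containers inside a VM, the lsof and telnet checks above never look inside that VM, and because the docker run command does not publish any ports, the message presumably comes from a check inside the container itself rather than from anything on the Mac. A sketch of probing the port from inside the image (plain bash, so it does not assume extra tools in the image):

# open a shell in the image without starting Solr
docker run --rm -it --entrypoint bash bitnami/solr:8.11.1
# inside the container: probe 8983 with bash's built-in /dev/tcp redirection
(exec 3<>/dev/tcp/127.0.0.1/8983) 2>/dev/null && echo "8983 in use" || echo "8983 free"
# back on the host: check for a leftover container that still owns the name "solr"
docker ps -a --filter name=solr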

Docker: Why Doesn't Sudo Work as Expected?

Note: Everyone is pointing to other problems and yet ignoring my main problem...
My Dockerfile looks like this:
# BUILD: docker build -t default_credentials ./
# RUN: docker run -u root -t -i default_credentials
FROM ubuntu:latest
FROM python:latest
RUN apt-get update -y && apt-get upgrade -y
ADD default_credentials ./default_credentials
RUN pip3 install -r ./default_credentials/requirements.txt
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list
RUN apt-get update && apt-get -y install google-chrome-stable
RUN apt-get update && apt-get -y install nmap
RUN apt-get -y install sudo
CMD /bin/bash
When it starts running I type:
sudo nmap -PE -sn 10.0.0.2
But the output doesn't contain the device's MAC address, which means nmap didn't run with high privileges. How can I fix this?
Please note: only when I run nmap on my PC with sudo does it show the MAC address; otherwise it won't report the MAC address.
But the output doesn't contain the device's MAC address, which means nmap didn't run with high privileges. How can I fix this?
This assumption is not well-founded. There are multiple reasons why nmap might not have access to Ethernet addresses; not being run with root privileges is only one of them, and fixing the other problems is not sudo's job.
By default, Docker containers do not have CAP_SYS_ADMIN, and their root users may be mapped to a non-root user by the kernel (user namespace remapping). This makes root inside a container less privileged than root outside a container, whether or not you're using sudo, unless you run the container in privileged mode.
Having access to MAC addresses requires bridged rather than routed networking (when traffic goes through a router, all Ethernet frames carry the router's address instead). Docker networking is routed through an internal NAT layer by default. https://docs.docker.com/network/bridge/ describes how to set up bridged networking in Docker; for what you're doing to work, you need a physical Ethernet device attached to the same bridge as your Docker container.
So sudo doesn't work as you expect because it's not supposed to fix every possible problem that could stop nmap from being able to see MAC addresses; it only fixes the problem of not running as root.
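As a rough sketch of the networking point (this uses host networking as an alternative to the bridged setup in the linked docs, and it assumes a Linux host that sits on the same Ethernet segment as 10.0.0.2; on Docker Desktop the extra VM gets in the way):

# run the image built from the Dockerfile above with the host's network namespace
# and the raw-socket capabilities nmap wants for ARP scanning
docker run --rm -it \
    --network host \
    --cap-add NET_RAW --cap-add NET_ADMIN \
    default_credentials \
    nmap -PE -sn 10.0.0.2

Run this way, nmap talks to the LAN directly instead of through Docker's NAT, so MAC addresses become visible again, without needing sudo inside the container at all.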

Google Compute Engine: Can't access running Docker container via HTTP requests

So I have this running container on GCE and all requests from the outside fail to connect.
If I do docker ps --all I get:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c516555621f0 hydra-parser:hydra-parser "/bin/sh -c 'gunicor…" 11 hours ago Up 11 hours 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp nervous_neumann
And it seems that the ports are open too.
imarquezc@hydra-parser:~$ sudo netstat -tulpn | grep LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 557/sshd
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN 32722/docker-proxy
tcp6 0 0 :::22 :::* LISTEN 557/sshd
tcp6 0 0 :::8000 :::* LISTEN 32729/docker-proxy
Additionally, if I curl localhost or 0.0.0.0 it works fine:
imarquezc@hydra-parser:~$ curl localhost:8000
<h1>EUD Extractor</h1>
Other info:
I have already enabled HTTP and HTTPS traffic in Google's console.
I also added the default-allow-http, http-server, and https-server network tags.
My Dockerfile looks like:
FROM ubuntu:focal
RUN apt-get -y update
RUN apt-get install poppler-utils python3 python3-pip -y
COPY requirements.txt /
RUN pip install --upgrade pip
RUN python3
RUN pip install -r requirements.txt
ADD . /
RUN python3 ./stanza_downloader.py
CMD gunicorn --bind 0.0.0.0:8000 main:app
and I run the container using:
docker run -p 8000:8000 hydra-parser:hydra-parser
What am I missing? Please help!
The tags you are using do not enable port 8000. They enable ports 80 and 443.
Create a VPC firewall ingress rule for port 8000.
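For example, a minimal rule could look like this (the rule name is illustrative; the http-server tag matches one already attached to the instance, and you should narrow --source-ranges for anything that is not meant to be public):

gcloud compute firewall-rules create allow-tcp-8000 \
    --network=default \
    --allow=tcp:8000 \
    --source-ranges=0.0.0.0/0 \
    --target-tags=http-server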

Running a graphical app in a Docker container, on a remote server with ssh

I am trying to connect via ssh -X to a remote machine and from there run a Docker container with the graphical interface enabled.
I have found and tried to follow this tutorial:
https://blog.yadutaf.fr/2017/09/10/running-a-graphical-app-in-a-docker-container-on-a-remote-server/
but I am getting the following error even with the final script:
xterm: Xt error: Can't open display: :10
If I add a sudo chmod -R 777 display in the script, I get this error instead:
X11 connection rejected because of wrong authentication.
xterm: Xt error: Can't open display: :10
What am I missing? Is there anything else I should configure, either locally, on the remote machine, or in the container?
on local machine
$ xauth list
test/unix:1 MIT-MAGIC-COOKIE-1 f3e4afd36654c52965e3e529c8a3ce88
test/unix:0 MIT-MAGIC-COOKIE-1 a806b599d75e8b6df4f107a87faa952f
test/unix:10 MIT-MAGIC-COOKIE-1 85342ce3b19106adfea27f3333ba19d8
test/unix:11 MIT-MAGIC-COOKIE-1 224cda4faa7c6d128748c2c35387df68
test:10 MIT-MAGIC-COOKIE-1 5061d4d95dc44d11b60137ad215f1abe
test:11 MIT-MAGIC-COOKIE-1 1eddbb0db43952186265eca6e49e08a4
$ echo $DISPLAY
:0.0
on remote machine (ssh -X localhost)
$ xauth list
test/unix:1 MIT-MAGIC-COOKIE-1 f3e4afd36654c52965e3e529c8a3ce88
test/unix:0 MIT-MAGIC-COOKIE-1 a806b599d75e8b6df4f107a87faa952f
test/unix:10 MIT-MAGIC-COOKIE-1 85342ce3b19106adfea27f3333ba19d8
test/unix:11 MIT-MAGIC-COOKIE-1 224cda4faa7c6d128748c2c35387df68
test:10 MIT-MAGIC-COOKIE-1 5061d4d95dc44d11b60137ad215f1abe
test:11 MIT-MAGIC-COOKIE-1 1eddbb0db43952186265eca6e49e08a4
$ echo $DISPLAY
test:10.0
$ sudo lsof -i4:6010
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sshd 25693 me 10u IPv4 575887 0t0 TCP *:6010 (LISTEN)
sshd 25693 me 13u IPv4 583043 0t0 TCP localhost:6010->localhost:33366 (ESTABLISHED)
socat 25735 me 3u IPv4 585843 0t0 TCP localhost:33366->localhost:6010 (ESTABLISHED)
within container
$ xauth list
xterm/unix:10 MIT-MAGIC-COOKIE-1 85342ce3b19106adfea27f3333ba19d8
$ echo $DISPLAY
:10
P.S. I have also tried the following SO solutions without success: [1], [2].

How can I create an SSH tunnel in a Docker container where the SOCKS proxy is accessible from the host machine?

I want to use a Docker container to create the SSH tunnel, since there are issues compiling Obfuscated OpenSSH on Mac whereas it is simple on Ubuntu.
Here is the Dockerfile I'm using:
FROM rastasheep/ubuntu-sshd:16.04
RUN apt-get update
RUN apt-get install -y software-properties-common python-software-properties
RUN add-apt-repository ppa:zinglau/obfuscated-openssh
RUN apt-get update
RUN apt-get install -y openssh-server
RUN apt-get update
ADD ./Key.pem /var/www/
CMD ["ping", "google.com","-c 3"]
ENTRYPOINT ssh -z -Z obfuscatedkey -4 -i "/var/www/Key.pem" -N -p 53 -D 6969 ubuntu@REMOTE_SERVER_ON_AWS -v
EXPOSE 6969
The problem I'm getting is that it seems to connect but I can't connect to the SOCKS proxy on my host machine by using 127.0.0.1:6969 as the proxy.
I've tried running it as docker run -i -t NAME -p 127.0.0.1:6969:6969 and also docker run -i -t NAME -P
But the SSH tunnel freezes at this step:
debug1: Authentication succeeded (publickey).
Authenticated to REMOTE_SERVER_ON_AWS ([IP_ADDRESS]:53).
debug1: Local connections to LOCALHOST:6969 forwarded to remote address socks:0
debug1: Local forwarding listening on 127.0.0.1 port 6969.
debug1: channel 0: new [port listener]
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
debug1: pledge: network
debug1: client_input_global_request: rtype hostkeys-00@openssh.com want_reply 0
Any help is appreciated thanks!
The issue is that you are creating a localhost-only tunnel inside the container. To use that tunnel you would need to be inside the Docker container.
When you use -p 127.0.0.1:6969:6969 in the docker run command, it means port 6969 of the container will receive all matching traffic from your machine, but the container receives that traffic on the IP address Docker assigned to it, which would be something like 172.2.0.2.
Your SSH tunnel inside the container is listening only on 127.0.0.1, not on 172.2.0.2, so it never receives that traffic. So change your Dockerfile line
ENTRYPOINT ssh -z -Z obfuscatedkey -4 -i "/var/www/Key.pem" -N -p 53 -D 6969 ubuntu@REMOTE_SERVER_ON_AWS -v
to
ENTRYPOINT ssh -z -Z obfuscatedkey -4 -i "/var/www/Key.pem" -N -p 53 -D 0.0.0.0:6969 ubuntu@REMOTE_SERVER_ON_AWS -v
And if the -D option doesn't work, then use the -L option.
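For completeness, a sketch of building and running it so the proxy is reachable from the host (the image and container names are made up, and note that -p has to come before the image name, unlike in the run commands quoted in the question):

docker build -t obfs-tunnel .
docker run -d --name obfs-tunnel -p 127.0.0.1:6969:6969 obfs-tunnel
# quick test from the host; requires a curl build with SOCKS support
curl -sI --socks5-hostname 127.0.0.1:6969 https://example.com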
