Is \r possible in Lua?

How do I use \r in Lua?
This is my code:
local socket = require("socket")

for port = 1, 65535 do
    local sock = socket.tcp()
    local scan = sock:connect("192.168.88.1", port)
    sock:close()
    if scan then
        print("[\27[1m\27[91m+\27[0m] " .. port .. "")
        print("Scanning...\r")
    end
end
and the result:
[+] 21
Scanning...
[+] 22
Scanning...
[+] 23
Scanning...
[+] 53
Scanning...
[+] 80
Scanning...
[+] 2000
Scanning...
[+] 8291
Scanning...
[+] 8728
Scanning...
[+] 8729
Scanning...
What I want is that, as soon as a port is displayed, the "Scanning..." line is erased and then reappears right after the port.
I can't find much documentation on Lua, sorry.
I tried variations like print(end="\r"), but it doesn't work.

print automatically appends a newline character (\n), which you don't want. Instead, you should use io.write. This, however, will not flush the output, so you'll also have to call io.flush. Here's a simple example using the Unix sleep command to wait one second between messages:
for i = 1, 10 do
    io.write(("test %02d\r"):format(i))
    io.flush()
    os.execute("sleep 1")
end
This will continuously overwrite test XX with the new value of i until i = 10 is reached.
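Applied to the port scanner above, it could look something like this (a sketch only; it assumes an ANSI-capable terminal, since \27[K is the escape sequence that erases the rest of the line, and the settimeout value is just illustrative):

local socket = require("socket")

for port = 1, 65535 do
    local sock = socket.tcp()
    sock:settimeout(0.5)  -- illustrative timeout so closed/filtered ports don't block for long
    local open = sock:connect("192.168.88.1", port)
    sock:close()
    if open then
        -- \r moves the cursor back to column 0 and \27[K erases the old
        -- "Scanning..." text, so the open port gets a clean line of its own.
        io.write("\r\27[K[\27[1m\27[91m+\27[0m] " .. port .. "\n")
    end
    io.write("Scanning...\r")
    io.flush()
end

This keeps a single "Scanning..." status line at the bottom and prints each open port above it as soon as it is found.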

Related

How to start a Docker container as fast as possible?

Consider the following command line (running on Ubuntu):
docker run --network=host phw
The container it references prints "Hello World" and exits:
FROM gcr.io/distroless/python3-debian11
ADD main.py .
CMD [ "python", "main.py"]
# Python script prints Hello World and exits.
Besides not using Python for this use case, how can I make this run as fast as possible?
Example timings
A parameter-less docker run clocks in at 977.8ms.
❯ hyperfine "docker run phw"
Benchmark 1: docker run phw
Time (mean ± σ): 977.8 ms ± 54.0 ms [User: 9.4 ms, System: 44.7 ms]
Range (min … max): 886.2 ms … 1059.6 ms 10 runs
With host networking, it is 2.4x faster!
❯ hyperfine "docker run --network=host phw"
Benchmark 1: docker run --network=host phw
Time (mean ± σ): 407.4 ms ± 25.8 ms [User: 1.6 ms, System: 51.6 ms]
Range (min … max): 372.0 ms … 449.4 ms 10 runs
What else can I do? And what are the tradeoffs?

Nmap NSE script sh: 1: not found

I am working on a scanner with Nmap. I am expanding this scanner with NSE scripts.
I have a script that runs Nuclei from Nmap. The script was made and is used by someone else, and it has worked before. However, when I run it now, I get the error: sh: 1: nuclei: not found.
Nuclei is (of course) installed on the system, and it works both as root and as a normal user. It looks like Nmap doesn't have access to Nuclei, but how do I fix this?
The NSE script:
local shortport = require "shortport"
local stdnse = require "stdnse"

portrule = function(host, port)
    return true
end

action = function(host, port)
    local handle = ""
    local always = stdnse.get_script_args("nuclei.always")
    local hostname = stdnse.get_hostname(host)
    if port.number == 80 then
        handle = io.popen("nuclei -u http://" .. hostname .. " -nc -silent -etags intrusive -rl 30 -rlm 1000 -bs 8 -c 8")
    elseif port.number == 443 then
        handle = io.popen("nuclei -u https://" .. hostname .. " -nc -silent -etags intrusive -rl 30 -rlm 1000 -bs 8 -c 8")
    elseif always == "yes" then
        handle = io.popen("nuclei -u " .. hostname .. " -nc -silent -etags intrusive -rl 30 -rlm 1000 -bs 8 -c 8")
    end
    local result = handle:read("*a")
    handle:close()
    return result
end
The Nmap command:
nmap -script=nuclei.nse -p80,443 -T2 IPADDRESS
Nmap is installed using Snap. It runs on Ubuntu.
The solution was quite simple:
Installing nmap with apt instead of snap did the job (most likely because the snap package runs confined and therefore cannot execute external binaries such as nuclei).
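For reference, on Ubuntu that switch amounts to roughly the following (assuming nmap was originally installed from the snap store):
sudo snap remove nmap
sudo apt update
sudo apt install nmap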

Run dask scheduler and workers in Amazon's ECS (Fargate)

I'm trying to run scheduler and worker Docker containers on Amazon ECS.
I'm using this example:
https://docs.dask.org/en/latest/setup/docker.html
The scheduler works perfectly; I successfully connected to it from my local machine:
distributed.scheduler - INFO - Remove client Client-0ae5b0fa
distributed.scheduler - INFO - Close client connection: Client-0ae5b0fa
distributed.scheduler - INFO - Remove client Client-0ae5b0fa
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-0ae5b0fa
I try to run the worker the same way, with this command:
dask-worker tcp://SCHEDULER_PUBLIC_IP:8786
The worker writes these logs and then exits:
+ exec 'dask-worker tcp://SCHEDULER_PUBLIC_IP:8786'
/usr/bin/prepare.sh: line 30: /dask-worker tcp://SCHEDULER_PUBLIC_IP:8786: No such file or directory
+ '[' '' ']'
no environment.yml
+ '[' -e /opt/app/environment.yml ']'
+ echo 'no environment.yml'
+ '[' '' ']'
+ '[' '' ']'
I expected the worker to connect to the scheduler, because the same commands worked when I tried them on an EC2 instance. Also, I tried this with all ports open to TCP connections, and still nothing.
Environment:
Dask docker container version: 6bfa3b19b4be (1 AUG 2021) (latest)
Fargate version: 1.4.0 (latest)
Container has 2 vCPUs, 4 GB memory
The problem was that my command was not comma-delimited. It was:
dask-worker 1.1.1.1:8786
It is supposed to be:
dask-worker,1.1.1.1:8786
so that Docker understands these are separate arguments:
Command ["dask-worker","1.1.1.1:8786"]

How to set a Flask app's debug mode to True in a Docker container

I'm running a Flask app in a Docker container, but I'm having issues with debugging. In my setup I have three microservices.
docker-compose.yml
version: '2.1'
services:
  files:
    image: busybox
    volumes:
      [..]
  grafana:
    [..]
  prometheus:
    [..]
  aggregatore:
    [..]
  classificatore:
    build: classificatore/.
    volumes:
      - [..]
    volumes_from:
      - files
    ports:
      - [..]
    command: ["python", "/src/main.py"]
    depends_on:
      rabbit:
        condition: service_healthy
  testmicro:
    [..]
  rabbit:
    [..]
For the classificatore service, I build the image as follows:
classificatore/Dockerfile
FROM python:3
RUN mkdir /src
ADD requirements.txt /src/.
WORKDIR /src
RUN pip install -r requirements.txt
ADD . /src/.
RUN mkdir -p /tmp/reqdoc
CMD ["python", "main.py"]
And here is the classificatore/main.py file:
from time import time
from sam import firstRead, secondRead, lastRead, createClassificationMatrix
from sam import splitClassificationMatrix, checkIfNeedSplit, printMatrix
from util import Rabbit, log, moveFile
from uuid import uuid4
from flask import Flask, request, render_template, redirect, send_from_directory
import os
import configparser
import json
from prometheus_client import start_http_server, Summary, Counter

config = configparser.ConfigParser()
config.read('config.ini')
rabbit = Rabbit()
inputDir = os.environ['INPUT_DIR'] if 'INPUT_DIR' in os.environ else config['DEFAULT']['INPUT_DIR']

# Create a metric to track time spent
REQUEST_TIME = Summary('classification_processing_seconds', 'Time spent to process a SAM file')
COUNTER_INPUT_FILE_SIZE = Counter('input_sam_size', 'Sum of input SAM file size')
COUNTER_OUTPUT_FILE_SIZE = Counter('output_sam_size', 'Sum of output SAM file size')

start_http_server(8000)

@REQUEST_TIME.time()
def classification(baseNameFile, AU_SIZE):
    nameFile = inputDir + "/" + baseNameFile
    startTime = time()
    numeroLetture = 1
    file_id = str(uuid4())
    log.info("Analizzo il file YYYYY (NomeFile: %s, Id: %s, AU_SIZE: %s)" % (nameFile, file_id, AU_SIZE))
    rnameArray, parameter_set = firstRead(nameFile)
    classificationMatrix = createClassificationMatrix(rnameArray)
    log.info("Creo un numero di range che dovrebbe dividire il file in file da %s reads" % (AU_SIZE))
    while (checkIfNeedSplit(classificationMatrix, AU_SIZE)):
        classificationMatrix = splitClassificationMatrix(classificationMatrix, AU_SIZE)
        log.info("Leggo il file di nuovo, perche' alcuni range sono troppo grandi")
        classificationMatrix = secondRead(nameFile, classificationMatrix)
        numeroLetture = numeroLetture + 1
    printMatrix(classificationMatrix)
    log.info("Sono state fatte %s letture" % (numeroLetture))
    log.info("Adesso scrivo i file")
    au_list = lastRead(nameFile, file_id, classificationMatrix, parameter_set['myRnameDict'])
    COUNTER_INPUT_FILE_SIZE.inc(os.path.getsize(nameFile))
    COUNTER_OUTPUT_FILE_SIZE.inc(moveFile(au_list, file_id))
    rabbit.enque_tasks(parameter_set, au_list, file_id)
    log.info("Tempo totale impiegato: %s sec" % int(time() - startTime))

app = Flask(__name__, template_folder='./web')

@app.route("/")
def index(message=None):
    log.info("Sono PRin index!!!")
    samFiles = os.listdir(config['DEFAULT']['INPUT_DIR'])
    samFiles = list(filter(lambda x: x.endswith('.sam'), samFiles))
    samFiles.sort()
    mpeggFiles = os.listdir(config['DEFAULT']['MPEGG_DIR'])
    mpeggFiles.sort()
    mpeggFiles = list(filter(lambda x: x.endswith('.mpegg'), mpeggFiles))
    return render_template('index.html', samFiles=samFiles, mpeggFiles=mpeggFiles, message=message)

@app.route("/upload", methods=['POST'])
def upload():
    f = request.files['file']
    f.save(os.path.join(config['DEFAULT']['INPUT_DIR'], f.filename))
    return index("Upload avvenuto con successo")

@app.route("/encode", methods=['POST'])
def encode():
    filename = request.form['filename']
    AU_SIZE = int(request.form['AU_SIZE'])
    classification(filename, AU_SIZE)
    return index("Encoding iniziato correttamente per il file: %s" % (filename))

@app.route('/download/<filename>', methods=['GET', 'POST'])
def download(filename):
    log.info("Download %s" % filename)
    mpeggDir = config['DEFAULT']['MPEGG_DIR']
    log.debug("mpeggDir: %s" % mpeggDir)
    filepath = os.path.join(mpeggDir, filename)
    log.debug("My filepath: %s" % filepath)
    return send_from_directory(directory=mpeggDir, filename=filename)

if __name__ == "__main__":
    app.run(host='0.0.0.0', debug=False)
I build and start the app by running:
$ docker-compose build
$ docker-compose up -d
To check the logs of classificatore:
docker logs <mycontainername>
If in classificatore/main.py I have
if __name__ == "__main__":
    app.run(host='0.0.0.0', debug=False)
I get
* Serving Flask app "main" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: off
2019-05-03 08:38:25,406 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
If in classificatore/main.py I set debug to True
if __name__ == "__main__":
    app.run(host='0.0.0.0', debug=True)
I get
* Serving Flask app "main" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: on
2019-05-03 08:40:57,857 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
2019-05-03 08:40:57,858 * Restarting with stat
Traceback (most recent call last):
File "/src/main.py", line 22, in <module>
start_http_server(8000)
File "/usr/local/lib/python3.7/site-packages/prometheus_client/exposition.py", line 181, in start_http_server
httpd = _ThreadingSimpleServer((addr, port), CustomMetricsHandler)
File "/usr/local/lib/python3.7/socketserver.py", line 452, in __init__
self.server_bind()
File "/usr/local/lib/python3.7/http/server.py", line 137, in server_bind
socketserver.TCPServer.server_bind(self)
File "/usr/local/lib/python3.7/socketserver.py", line 466, in server_bind
self.socket.bind(self.server_address)
OSError: [Errno 98] Address already in use
I guess I'm messing around with the ports, but I'm still a newbie with Docker.
Any help will be very welcome!
Thank you in advance.
EDIT 1: the output of $ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bb7c9a5b80eb encoder_mpeg-pre-encoder "python main.py" 2 minutes ago Up 12 seconds encoder_mpeg-pre-encoder_1
6a523161c191 encoder_classificatore "python /src/main.py" 2 minutes ago Exited (1) 11 seconds ago encoder_classificatore_1
e5d0287e9129 encoder_aggregatore "python /src/main.py" 5 minutes ago Up 12 seconds 0.0.0.0:8000->8000/tcp encoder_aggregatore_1
907327ef0342 grafana/grafana:5.1.0 "/run.sh" 6 minutes ago Up 18 seconds 0.0.0.0:3000->3000/tcp encoder_grafana_1
e57064e76aa1 busybox "sh" 6 minutes ago Exited (0) 18 seconds ago encoder_files_1
2b42907a31c4 rabbitmq "docker-entrypoint.s…" 6 minutes ago Up 18 seconds (healthy) 4369/tcp, 5671/tcp, 25672/tcp, 0.0.0.0:5672->5672/tcp encoder_rabbit_1
3f509108b69d prom/prometheus "/bin/prometheus --c…" 6 minutes ago Up 18 seconds 0.0.0.0:9090->9090/tcp encoder_prometheus_1
I guess you are changing the file inside the container. Ideally, you should change it on the host, where the actual development happens, and then build and run the Compose project again.
Change the classificatore/main.py file on the Docker host -
if __name__ == "__main__":
    app.run(host='0.0.0.0', debug=True)
Build and run the app again -
$ docker-compose build
$ docker-compose up -d
It's best to use environment variables in such cases, so that you don't need to change your source code every time you flip the debug switch.
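For example, a minimal sketch of the environment-variable approach (the FLASK_DEBUG_MODE name and its wiring into docker-compose.yml are illustrative, not part of the original project):

import os

from flask import Flask

app = Flask(__name__)

if __name__ == "__main__":
    # Set FLASK_DEBUG_MODE=1 (e.g. under the service's "environment:" key in
    # docker-compose.yml) to turn the debugger/reloader on without editing code.
    debug = os.environ.get("FLASK_DEBUG_MODE", "0") == "1"
    app.run(host="0.0.0.0", debug=debug)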
To rebuild the Compose project from scratch, run the commands below -
$ docker-compose down -v
$ docker-compose build --no-cache
$ docker-compose up -d
In case you still receive errors, share the output of docker ps after running the above commands.
I can see the error below -
File "/src/main.py", line 22, in <module>
start_http_server(8000)
Your docker ps output shows that something else is already running on port 8000, probably aggregatore. On top of that, start_http_server(8000) is also trying to bind port 8000 on the same network, which is what causes the conflict here. Try changing the ports so that the conflict doesn't occur.
EDIT 1:
By using debug=True you are telling Flask to reload the server each time main.py changes. In doing so, it re-executes main.py each time, killing the app and then restarting it on port 5000 in the process. That is expected behaviour.
The problem is that you also have a call to start_http_server(8000) that creates a server on port 8000. Flask does not manage that server, so the reload leads to an exception because the previous instance is still using the port.
The error traceback is clear about this (OSError: [Errno 98] Address already in use), but the telling hint is that the error happens right after the server restarts.
2019-05-03 08:40:57,857 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
2019-05-03 08:40:57,858 * Restarting with stat
Traceback (most recent call last):
File "/src/main.py", line 22, in <module>
start_http_server(8000)
File "/usr/local/lib/python3.7/site-packages/prometheus_client/exposition.py", line 181, in start_http_server
httpd = _ThreadingSimpleServer((addr, port), CustomMetricsHandler)
You'll need to handle the lifecycle of that service outside of the main.py script, or handle the exception.
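One way to handle it inside main.py itself, as a sketch (the FLASK_DEBUG_MODE variable and the start_metrics_once helper are illustrative names, not part of the original code), is to bind the Prometheus port only in the process that actually serves requests; when the reloader is active, Werkzeug marks that child process with WERKZEUG_RUN_MAIN=true:

import os

from prometheus_client import start_http_server

def start_metrics_once(port=8000):
    # Under the reloader (debug on) the script is executed twice: once by the
    # watcher process and once by the serving child, which has
    # WERKZEUG_RUN_MAIN=true. Only that child should bind the metrics port.
    debug = os.environ.get("FLASK_DEBUG_MODE", "0") == "1"
    if debug and os.environ.get("WERKZEUG_RUN_MAIN") != "true":
        return
    start_http_server(port)

Alternatively, wrapping the start_http_server(8000) call in a try/except OSError at least keeps the reload from crashing, at the cost of leaving the metrics endpoint with the first process.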
EDIT 2:
Your problem has nothing to do with Docker; it is more about setting up Prometheus inside a Flask application. Note that prometheus_client/exposition.py is what raises the exception.
There are some extensions that help with this, for instance:
https://github.com/sbarratt/flask-prometheus or
https://github.com/hemajv/flask-prometheus
Maybe this sheds some light on the solution as well, but please note that this is not what you are asking here.
EDIT 3:
I suggest first giving these extensions a shot, which means refactoring your code. If you then run into a problem implementing them, create another question providing an MCVE.

How to build a Dockerfile / container from scratch to simply provide a file

I am trying to create a very simple image from scratch that simply provides a file. But I'm not very skilled at writing Dockerfiles (it's on my learning queue).
I'd like the container, whenever it starts, to copy the file from the container onto the local host via the mapped volume, nothing else. I'm struggling to accomplish this: I can build the image fine, but I am unable to get the files onto the local box.
Dockerfile
FROM scratch
ADD datafile.dat /datafile.dat
ADD true /true
VOLUME ["/tmp/"]
COPY datafile.dat /tmp
CMD ["/true"]
'true' is nothing but an echo program. I build the image with
docker build -t datafile:latest -f Dockerfile .
Apparently, the build goes fine. But when I try to run it I get nothing, and the container exits with an error.
$ docker run -dt -v /:/tmp datafile:latest
0a317b5d7459c86ff260513093d115f94be74671ebc08bc125d9e871a55695c5
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0a317b5d7459 datafile:latest "/true" 6 seconds ago Exited (1) 4 seconds ago laughing_keller
$ docker logs 0a3
standard_init_linux.go:195: exec user process caused "no such file or directory"
From the build directory:
$ ll
total 36
-rw-rw-r--. 1 vUser vUser 0 Aug 23 12:59 datafile.dat
-rw-rw-r--. 1 vUser vUser 126 Aug 23 13:06 Dockerfile
-rwxr-xr-x. 1 vUser vUser 28920 Aug 23 13:05 true
You can use the docker cp command to copy the file from the container to the local disk.
docker cp <containerId>:/file/path/within/container /host/path/target
Also, you can first try interactive mode and copy the file manually to see what happens.
docker run -i --rm -v ${PWD}/tmp:/tmp/ datafile:latest bash
I ended up creating a very small cat-like binary in Go and building the image as:
FROM scratch
COPY FILE_I_WANT_TO_SHARE /
COPY catgo /
ENTRYPOINT ["/catgo", "/FILE_I_WANT_TO_SHARE"]
The Go code, if anyone's interested:
package main

import (
    "io"
    "log"
    "os"
)

func readWrite(src io.Reader, dst io.Writer) {
}

func main() {
    if len(os.Args) == 1 {
        // No file arguments: copy stdin to stdout.
        _, err := io.Copy(os.Stdout, os.Stdin)
        if err != nil {
            log.Fatal(err)
        }
    } else {
        // Otherwise write each named file to stdout, cat-style.
        for _, fname := range os.Args[1:] {
            fh, err := os.Open(fname)
            if err != nil {
                log.Fatal(err)
            }
            _, err = io.Copy(os.Stdout, fh)
            if err != nil {
                log.Fatal(err)
            }
        }
    }
}
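With that in place, the file can be pulled out on the host without any volume mapping, e.g. with something like docker run --rm datafile:latest > datafile.dat (assuming the image is tagged datafile:latest as above): the entrypoint writes the file to stdout, so redirecting stdout captures it.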
