I am running Guacamole from a Docker image and I want to record RDP sessions. I've recorded an RDP session, which is stored in raw format. The Guacamole documentation describes a utility called guacenc, which converts the recorded data into an .m4v video with this command:
guacenc /path/to/recording/NAME
What I don't know is where I have to run this command.
You have to be logged into a terminal session on your server.
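If Guacamole itself runs in Docker, the recording usually lives inside (or in a volume mounted into) the guacd container, so one option is to run guacenc in that container with docker exec. A minimal sketch, assuming the container is named guacd and using the path from the question (guacenc is part of guacamole-server and is normally present when the image was built with FFmpeg support; if it is missing, install guacamole-server with FFmpeg support on the host and run guacenc there against the mounted recording directory):

docker exec -it guacd guacenc /path/to/recording/NAME

The encoder writes NAME.m4v next to the input file, which you can then copy out with docker cp or pick up from the mounted volume.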
I am trying to set up a single Docker container that can stream a local file as RTSP on a port.
Meaning, within the container there will be some local videos published as RTSP on a port of that container.
Then, externally, I can fetch the stream from rtsp://<host>:<port>/mystream.
I tried looking into rtsp-simple-server, but it does not seem to have an option for streaming a local file; instead it requires first setting up the server container and then using ffmpeg to publish the video to it.
Is there a way to achieve the desired single-container RTSP streaming server?
There is another answer that suggests building an image with VLC installed, but it seems bulky and overkill, and the resulting stream does not seem as smooth.
Hi,
Most RTSP servers work the way you describe: you have a server instance and then publish a stream to it.
It's not hard to build your own RTSP server with GStreamer and Python; here is a link (look at the answer), they did exactly what you need.
It's a good start for a new programming project =)
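A minimal sketch of that GStreamer/Python approach, assuming the image has GStreamer, the Python GObject bindings and the RTSP server bindings installed (for example python3-gi, gir1.2-gst-rtsp-server-1.0 and the gstreamer1.0 plugin packages on Debian), and that the video lives at /videos/sample.mp4; the path, port and mount point are placeholders:

# serves /videos/sample.mp4 at rtsp://<host>:8554/mystream
import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstRtspServer', '1.0')
from gi.repository import Gst, GstRtspServer, GLib

Gst.init(None)

server = GstRtspServer.RTSPServer()
server.props.service = '8554'  # port to publish from the container

factory = GstRtspServer.RTSPMediaFactory()
# decode the local file and re-encode it as H.264 for RTSP clients
factory.set_launch(
    '( filesrc location=/videos/sample.mp4 ! decodebin ! videoconvert '
    '! x264enc tune=zerolatency ! rtph264pay name=pay0 pt=96 )'
)
factory.set_shared(True)  # all clients share one pipeline

server.get_mount_points().add_factory('/mystream', factory)
server.attach(None)

GLib.MainLoop().run()

With that script as the container's entrypoint and port 8554 exposed, a single container serves the stream without a separate ffmpeg publisher.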
I'm trying to self-host videos with nginx using nginx-rtmp-module (VOD), similar to YouTube.
I successfully hosted videos by using ffmpeg to convert an mp4 file to DASH chunks.
I want my site to be able to:
1. upload a video
2. have the containerized Golang app save the file locally
3. run an ffmpeg script to convert it to DASH chunks
How can I handle the third step?
Is there a better way to build a self-hosted VOD service?
I run /usr/bin/ffmpeg but get a "not found" error.
That is because the ffmpeg executable mounted from the host depends on dynamic libraries that are either not present in the Docker image or not at the right version (see ldd or lddtree for analysis).
It is better to build a dedicated image with the right tools installed in it rather than relying on the host content for program execution.
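Following that advice, one way to handle the third step is to install ffmpeg in the image that runs the Golang app and invoke it from the app (for example via os/exec). A rough sketch, assuming a Debian-based image; the paths and segment duration are placeholders:

# in the app's Dockerfile: put ffmpeg inside the image instead of mounting it from the host
RUN apt-get update && apt-get install -y ffmpeg && rm -rf /var/lib/apt/lists/*

# command the app runs after the upload has been saved, producing DASH segments plus a manifest
ffmpeg -i /data/uploads/input.mp4 -map 0 -c:v libx264 -c:a aac \
  -f dash -seg_duration 4 /data/dash/input/manifest.mpd

The dash muxer writes the .mpd manifest and the segment files into the output directory, which nginx can then serve as static content.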
We have a script that downloads files from a Telegram channel, using the Telethon library for Python.
To create a Telethon instance, we use the TelegramClient constructor. On first run it asks the user to enter their Telegram phone number in the console; Telegram then sends a security code, which has to be typed back into the console.
This authentication is saved in an object/file/DB called a session, so on the next execution the TelegramClient will not ask for the phone number again.
Now I want to create a Docker image for the script, which means that when a user creates a container from the published image, they will have to go through the authentication process again, and this is the question:
What options do we have to make this authentication as automatic as possible?
We can use Docker tricks, Telegram/Telethon tricks, and maybe Python tricks...
I will try to suggest one option to solve this.
We can save the session in the host file system and mount its location into the Docker container as a volume.
Then we can run a script, outside of any container, that authenticates and creates this session; when the container starts, it will already have a session.
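A sketch of that out-of-container step, assuming a file-based session named 'downloader' (Telethon then writes downloader.session next to the script, and that file is what you later mount into the container); api_id and api_hash are placeholders:

# run once on the host; prompts for the phone number and login code, then saves downloader.session
from telethon.sync import TelegramClient

api_id = 12345               # placeholder: your own api_id
api_hash = 'your_api_hash'   # placeholder: your own api_hash

with TelegramClient('downloader', api_id, api_hash) as client:
    print('Logged in as', client.get_me().first_name)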
You can use StringSession to save and restore access for the Telethon client.
Just generate the session as a plain string and store it in a Docker secret.
https://docs.telethon.dev/en/latest/concepts/sessions.html#string-sessions
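A minimal sketch of that approach, assuming the generated string is later injected into the container as a secret or environment variable (the variable name and credentials below are placeholders):

import os
from telethon.sync import TelegramClient
from telethon.sessions import StringSession

api_id = 12345               # placeholder: your own api_id
api_hash = 'your_api_hash'   # placeholder: your own api_hash

# one-off, outside the container: log in interactively and print the session string
with TelegramClient(StringSession(), api_id, api_hash) as client:
    print(client.session.save())   # store this string in a Docker secret / env var

# inside the container: rebuild the client from the saved string, no interactive login needed
session_string = os.environ['TG_SESSION']   # or read it from /run/secrets/<secret_name>
client = TelegramClient(StringSession(session_string), api_id, api_hash)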
You can do this by creating a bind-mount for your session file plus any config data -- I recommend using something like python-dotenv. You can set this up in your Dockerfile, as well as Docker Compose. See here for the Dockerfile and here for Docker Compose.
Just ensure that you set a sane path to the session file within your container.
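In plain docker run terms, that bind-mount looks roughly like this (image name and host paths are placeholders; the host directory holds the session file created beforehand, and the .env file is what python-dotenv reads inside the container):

docker run --rm \
  -v /home/me/telethon/session:/app/session \
  -v /home/me/telethon/.env:/app/.env:ro \
  my-telethon-image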
I can't get the Document to PDF Converter app to work on a Nextcloud installation using Docker and docker-compose
I'm setting up Nextcloud at home using docker images. So far I've managed to run Nextcloud successfully following this tutorial: https://blog.ssdnodes.com/blog/installing-nextcloud-docker/
What I'm trying now is to enable the Document to PDF Converter app:
I installed and enabled it using the Nextcloud web interface
I set up rules to match all LibreOffice MIME types (the same rules I used to add tags to LibreOffice files; the tags work, so I assume the rules are good)
I (manually) installed LibreOffice inside the Nextcloud Docker container
I added:
'preview_office_cl_parameters' =>
' --headless --nologo --nofirststartwizard --invisible --norestore '.
'--convert-to png --outdir ',
'preview_libreoffice_path' => '/usr/bin/libreoffice',
to config.php
I restarted all the containers
But no PDF is created when I create or update a LibreOffice document.
To test it, I created an odt document on my PC and let the Nextcloud desktop client sync it with the server. The file is uploaded (and updated if I modify it), but no PDF has been created.
Has anyone configured PDF conversion in this scenario, or can you help me find some clue?
Thanks.
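For what it's worth, one way to narrow this down is to run the configured LibreOffice command by hand inside the Nextcloud container as the web server user and check whether the conversion itself succeeds. A debugging sketch, not from the original post: the container name and test file are placeholders, www-data is the usual web server user in the official image, and HOME is pointed at a writable directory because LibreOffice needs somewhere to create its profile:

docker exec -u www-data -e HOME=/tmp nextcloud \
  /usr/bin/libreoffice --headless --convert-to pdf --outdir /tmp /tmp/test.odt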
I use JACK to route audio between multiple sound cards in my PC.
To record the audio I use a very convenient FFmpeg command which creates a writable JACK client:
ffmpeg -f jack -i <client_name> -strict -2 -y <output_file_name>
So far this works very well.
The problem starts here:
I also have an nginx Docker container which records my data and makes it available for streaming. When trying to use the same command inside the container, I get the following error: "Unable to register as a JACK client".
I started to look into the FFmpeg code and found out that the FFmpeg command calls jack_client_open from the JACK API, and that call is what fails.
It seems like there is some kind of problem in the connection between the FFmpeg client inside the container and the jackd server running on the host.
Is there a simple way to create a connection between the two [exposing ports]?
(I saw some solutions like netjack2, but before building a more complex server-client architecture I'd like to find a more elegant solution.)
Thanks for the help!
I've just got this working, and I needed the following in my docker run command:
--volume=/dev/shm:/dev/shm:rw
--user=1000
This makes the container run as a user which can access the files in /dev/shm created by a jackd spawned from my host user account. It wouldn't be required if your jackd and the container were both running as root.
You can confirm it's working by running jack_simple_client in the container; you should get a beep.
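Put together, a docker run along these lines ties the pieces together (image name, JACK client name and output path are placeholders; 1000 has to match the UID that started jackd on the host):

docker run --rm \
  --volume=/dev/shm:/dev/shm:rw \
  --user=1000 \
  my-ffmpeg-image \
  ffmpeg -f jack -i my_client -strict -2 -y /recordings/out.wav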