Telethon TelegramClient authentication in Docker container

We have a script that downloads files from a Telegram channel, using the Telethon library for Python.
To create a Telethon instance, we use the TelegramClient constructor. On first run it asks the user to enter their Telegram phone number in the console; Telegram then sends a security code, which has to be typed back into the console.
This authentication is saved in an object/file/DB called a session, so on the next execution the TelegramClient will not ask for the phone number again.
Now I want to create a Docker image for the script, which means that when a user creates a container from the published image, they will have to go through the authentication process. So the question is:
What ways do we have to make this authentication as automatic as possible?
We can use Docker tricks, Telegram/Telethon tricks, and maybe Python tricks...

I will suggest one option to solve this.
We can save the session on the host file system and mount its location into the Docker container as a volume.
Then we can create a script that authenticates and creates this session outside of a container, so that when the container starts it already has a session.
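The approach above could look something like this. This is a hedged sketch: the image name `downloader`, the `auth.py` script, and the paths are all placeholders, not part of the original question.

```shell
# One-time, interactive, on the host: run the login flow so Telethon
# writes its .session file into a known directory.
mkdir -p ~/.telegram-session
python auth.py   # prompts for phone number + code, saves the session file

# Afterwards, every container start reuses that session via a volume,
# so no interactive login happens inside the container.
docker run -v ~/.telegram-session:/app/session downloader
```

The script inside the image just has to point TelegramClient at the session file under /app/session.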

You can use StringSession to save and restore access for the Telethon client.
Just generate the session as a simple string and save it in a Docker secret.
https://docs.telethon.dev/en/latest/concepts/sessions.html#string-sessions
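Following the linked docs, generating the string looks roughly like this. The `api_id`/`api_hash` values are placeholders for your own credentials from my.telegram.org, and the first run is still interactive:

```python
# Sketch, per the Telethon string-session docs: log in once and print
# a portable session string. api_id/api_hash below are placeholders.
from telethon.sync import TelegramClient
from telethon.sessions import StringSession

api_id = 12345
api_hash = "0123456789abcdef0123456789abcdef"

with TelegramClient(StringSession(), api_id, api_hash) as client:
    # Interactive login happens here the first time; afterwards this
    # prints the string you can store in a Docker secret.
    print(client.session.save())
```

Inside the container you would then construct the client with `StringSession(saved_string)` instead of an empty `StringSession()`, so no prompt appears.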

You can do this by creating a bind mount for your session file plus any config data -- I recommend using something like python-dotenv. You can set this up in your Dockerfile as well as in Docker Compose. See here for the Dockerfile and here for Docker Compose.
Just make sure you set a sane path to the session file within your container.
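One way to keep that path "sane" in both environments is to read it from configuration rather than hard-coding it. A minimal stdlib-only sketch (the `SESSION_PATH` variable name and the default path are my assumptions, not anything Telethon requires):

```python
# Sketch: resolve the session file path from an environment variable,
# falling back to a container-friendly default. SESSION_PATH and the
# default path are placeholder names.
import os

def session_path(default="/app/session/my.session"):
    """Return the session file path, preferring the SESSION_PATH env var."""
    return os.environ.get("SESSION_PATH", default)

os.environ["SESSION_PATH"] = "/data/session/my.session"
print(session_path())  # -> /data/session/my.session
```

With python-dotenv you would load a `.env` file first, but the lookup itself works the same way.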

Related

How to modify the configuration of the openGauss database in Docker

Recently I was trying to deploy the openGauss database using Docker, and I saw that this image was released by your company.
I have currently encountered the following two problems:
1. I could not find the corresponding database configuration files ("pg_hba.conf" or "postgresql.conf"). Where are these files located in the Docker image? If they are not present, can the configuration be modified with the gs_* tools?
2. When the database container is stopped and restarted, the image is launched with no parameters linking to a configuration file, so there is no way to modify the database configuration. At present, the only solution I can think of is to "commit & save" the modified running container into a new image. Is this the only solution?
1. pg_hba.conf and postgresql.conf are in /var/lib/opengauss/data; using gs_guc to modify parameters is supported.
2. After changing parameters that require a database restart to take effect, just restart the container directly.
3. You can also persist the data if you want; specify it through the -v parameter when running:
-v /enmotech/opengauss:/var/lib/opengauss
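Put together, a parameter change might look roughly like this. Treat it as a sketch: the container name `opengauss` and the example parameter are placeholders, and you should check the exact gs_guc invocation against the openGauss docs for your version.

```shell
# Change a parameter in the data directory inside the running container...
docker exec opengauss gs_guc set -D /var/lib/opengauss/data -c "max_connections = 500"

# ...then restart the container if the parameter requires a restart.
docker restart opengauss
```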

securing local files/keys/tokens inside docker container

This concerns a Docker container deployed in a microk8s cluster. The container is deployed through k8s with a host volume mounted inside it. When the container runs, it generates a few keys and tokens to establish a secure tunnel with another container outside of this node. It creates those keys inside the provided mount path, as plain files (public.key, private.key, .crt, .token, etc.). The tokens are also refreshed at some time interval.
Now I want to secure those generated tokens/keys so that they can't be accessed by outsiders to harm the system/application. I am thinking of something like a vault store, but maintained inside the container or on the host in some encrypted form, so that whenever the container application wants the files, it can decrypt them from that path/location and use them.
Is there any way this can be achieved inside a Docker container on an Ubuntu 18 host OS with k8s v1.18? Initially I thought of Linux keyrings or some GPG encryption mechanism, but I am not sure whether that would affect the container runtime performance. I am fine with implementing code in Python/C to encrypt/decrypt the files for the application inside the container, but the encryption mechanism should be FIPS compliant or an industry standard.
Also, is there any way we can encrypt the directory where those keys are generated and decrypt it when needed by the application, or some directory-level permission we can set so those files can't be read by other users?
Thanks for reading this long post. I don't have a clear solution for this as of now; any pointers and suggestions are much appreciated.
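The directory-permission part of the question is the easy half, and can be sketched with the stdlib alone. Note this only restricts access by other non-root local users; it is not encryption, and the paths here are placeholders:

```python
# Sketch of directory-level hardening: make the key directory readable
# and traversable by its owner only (mode 0700). This limits other
# local users but does not protect against root or encrypt anything.
import os
import stat
import tempfile

key_dir = tempfile.mkdtemp(prefix="keys-")  # stand-in for the mount path
os.chmod(key_dir, 0o700)  # rwx for owner, nothing for group/others

mode = stat.S_IMODE(os.stat(key_dir).st_mode)
print(oct(mode))  # -> 0o700
```

For actual at-rest encryption you would layer something like GPG or an AES-based scheme on top, which is a separate design decision.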

Is it possible to start a docker container with some env variables from the docker API

I'm using the Docker API to manage my containers from a front-end application, and I would like to know if it is possible to use /containers/{id}/start with some environment variables; I can't find it in the official docs.
Thanks!
You can only specify environment variables when creating a container. Starting it just starts the main process in the container that already exists with its existing settings; the “start” API call has almost no options beyond the container ID. If you’ve stopped a container and want to restart it with different options, you need to delete and recreate it.
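Concretely, against the Engine API the environment goes in the create call, and the start call carries no body worth mentioning. A sketch over the local Unix socket (API version, image, and names are placeholders):

```shell
# Create the container WITH the environment variables...
curl --unix-socket /var/run/docker.sock \
  -H "Content-Type: application/json" \
  -X POST "http://localhost/v1.41/containers/create?name=demo" \
  -d '{"Image": "alpine", "Env": ["FOO=bar"], "Cmd": ["env"]}'

# ...then start it; no environment can be passed here.
curl --unix-socket /var/run/docker.sock \
  -X POST http://localhost/v1.41/containers/demo/start
```

To change `Env`, you delete the container and repeat the create call with the new values.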

How to store private data on a server safely?

I was thinking about what the most secure place is to store private data (credentials to a DB, for example).
I see 2 options:
in environment variables
in a file
The 2nd option seems more secure, especially when you set chmod a-rwx on the file so that only sudo users can read it.
When we run a Docker container, the code inside has root access by default.
So what do you think about this idea:
create a file with no access bits set (chmod a-rwx private.txt)
run Docker and provide the file to it: docker run -v=$(pwd):/app php:7.3-alpine3.9 cat /app/private.txt
the Docker user has to be in the sudo group
Now, when a hacker breaks into the server, he will not be able to read the credentials stored in the private.txt file, while our program in the Docker container can read the file. The hacker would need root access, but with root access he can do whatever he wants anyway.
What do you think about this idea? Is it secure?
What do you think about this idea? Is it secure?
If you intend to use swarm, you can check Docker's article "Manage sensitive data with Docker secrets".
Regarding your secret file: without going into the pros and cons of that method, if your program has an exploitable vulnerability, a hacker could potentially gain access to your files on behalf of the running program.
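For reference, the secrets flow from that article looks roughly like this (it requires swarm mode; the secret and service names are placeholders). The secret is exposed to the service read-only under /run/secrets/:

```shell
# Swarm mode is required for secrets.
docker swarm init

# Create a secret from stdin...
printf 'my-db-password' | docker secret create db_password -

# ...and grant it to a service; it appears at /run/secrets/db_password.
docker service create --name app --secret db_password php:7.3-alpine3.9 \
  cat /run/secrets/db_password
```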

does docker remote api support creating a container using a docker-compose file?

I have an app that creates Docker containers using the Docker remote API, via this library.
So far it is working fine with simple configuration options for container creation. Now I need to create containers with many more config options, so I am wondering if I can use a docker-compose file. The library is based on v1.23 of the Docker remote API spec; does the Docker remote API support creating a container from a compose file?
I cannot find an option in this documentation, but I wonder if I am looking in the wrong place.
No; Docker Compose itself is an application that uses the API. You’d need to directly run docker-compose up or something similar as a shell command if you wanted to directly use it.
(You might be able to hack into its internals if you have a Python program, but not from Java.)
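The "run it as a shell command" suggestion can be sketched like this (in Python for illustration, though the same subprocess approach works from Java). It assumes docker-compose is installed and a docker-compose.yml exists in `project_dir`:

```python
# Sketch: drive Docker Compose as an external command instead of the
# remote API. project_dir is a placeholder for the directory holding
# the docker-compose.yml.
import subprocess

def compose_up(project_dir):
    """Start the services defined in project_dir/docker-compose.yml."""
    subprocess.run(["docker-compose", "up", "-d"], cwd=project_dir, check=True)
```

`check=True` raises if docker-compose exits nonzero, so failures surface as exceptions in the calling app.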
