Force Docker/Jellyfin to use existing config data on Synology

I previously ran Jellyfin on my desktop computer and put a lot of work into the manual creation of collections (descriptions, folder pictures, etc.). Now I want to set up Jellyfin on my brand-new Synology DS220+ NAS, which runs Jellyfin in Docker. If my understanding of Docker is correct, it runs an instance of the Jellyfin app, so while it is running I am not able to see Jellyfin's folders/files in my File Station browser.
So my question is: how can I force Jellyfin/Docker to use the existing Jellyfin collection data from my desktop PC (which is basically a set of .xml files)?
Thanks in advance!!

You need to map file paths for config, cache, and media. From the Jellyfin docs:
docker run -d -v /srv/jellyfin/config:/config -v /srv/jellyfin/cache:/cache -v /media:/media --net=host jellyfin/jellyfin:latest
The -v flag defines a volume mapping: left of the colon (:) is the path on your machine, while right of it is the path Jellyfin sees inside its container.
If you don't map a container's internal volumes to paths on your machine, you cannot view those files, and the data will be lost once the container is removed.
So basically, replace the config, cache, and media paths in the above command with your own folder paths.
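For example, on a Synology NAS where shared folders typically live under /volume1, the command might look like this (the /volume1/... paths are assumptions; substitute your own shared-folder paths):
docker run -d -v /volume1/docker/jellyfin/config:/config -v /volume1/docker/jellyfin/cache:/cache -v /volume1/video:/media --net=host jellyfin/jellyfin:latest
Copy your existing Jellyfin config (including the collection .xml files) from your desktop PC into the host-side config folder before starting the container; the files are then visible both to Jellyfin and in File Station.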

The docker run hello-world suggested by #vanshaj is meant to be executed in a terminal window (SSH session), not inside a docker container.
Another approach would be to install the Jellyfin package from synocommunity.com. This package does not use Docker and became available just a few days ago.

Related

How to work with the files from a docker container

I need to work with all the files from a docker container; my approach is to copy them from the container to my host.
I'm using the following docker commands, for example with the postgres image:
docker create -ti --name dummy_1 postgres bash
docker cp dummy_1:/. Documents/docker/dockerOne
With this I have all the container's folders and files on my host.
The idea is then to traverse all the files with the Java API, work with them, and finally delete the local files and folders. But I would like to know if there is a better approach, perhaps accessing the container files directly from Java instead of creating a local copy on my host.
Any ideas?
You could build a small server app inside your docker container that serves the information you need on an exposed port. That's how I would do it.
Maybe I don't understand the question, but you can mount a volume when you run the container (rather than when you create it):
docker run -v /host/path:/container/path your_image
Any code in the container (e.g. Java) that modifies files under /container/path will be reflected on the host, with no need to copy anything back in or out. Similarly, any modifications on the host filesystem will be visible in the container.
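Applied to the postgres example from the question, a session might look like this (the ./data host directory and /container/path target are illustrative choices):
docker run -ti --rm -v "$(pwd)/data:/container/path" postgres bash
Anything created under /container/path inside the container now appears in ./data on the host (and vice versa), so the files can be processed directly without docker cp.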
I don't think I can implement an API in the docker container
Yes you can. You bind a TCP port using the -p flag.
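For example (the port numbers and image name are placeholders):
docker run -p 8080:8080 your_image
This publishes container port 8080 on host port 8080, so a small server listening inside the container becomes reachable at localhost:8080.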

docker run not syncing local folder in windows

I want to sync a local folder with a folder in a docker container. I am using a Windows system with the WSL 2 backend. I tried running the following command as per the instructions of a docker course instructor, but it didn't seem to sync:
docker run -v ${pwd}:\app:ro --env-file ./.env -d -p 3000:4000 --name node-app node-app-image
I faced a similar issue when I started syncing local folders with docker containers on my Windows system. The solution was actually quite simple: instead of using -v ${pwd}:\app:ro for your first volume, it should be -v ${pwd}:/app:ro. Notice the / instead of \. Since your docker container is a Linux container, the path must use /.
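For reference, the corrected command from the question then becomes:
docker run -v ${pwd}:/app:ro --env-file ./.env -d -p 3000:4000 --name node-app node-app-image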
As #Sysix pointed out, docker will always overwrite the folder in the container with the one on the host (whether or not it already existed). Only files created on the host, or in the container at runtime, will be present in that folder/volume.
Learn more about bind mounts and volumes here.

Access files on host server from the Meteor App deployed with Meteor Up

I have a Meteor app deployed with Meteor Up to Ubuntu.
From this app I need to read a file which is located outside the app container, on the host server.
How can I do that?
I've tried to set up volumes in mup.js, but no luck. It seems that I'm missing how to correctly provide /host/path and /container/path:
volumes: {
// passed as '-v /host/path:/container/path' to the docker run command
'/host/path': '/container/path',
'/second/host/path': '/second/container/path'
},
I've read the Docker docs on mounting volumes but obviously don't understand them.
Let's say the file is at /home/dirname/filename.csv.
How do I correctly mount it into the app so that it is accessible from the application?
Or maybe there are other possibilities to access it?
Welcome to Stack Overflow. Let me suggest another way of thinking about this...
In a scalable cluster, docker instances can be spun up and down as the load on the app changes. These may or may not be on the same host computer, so building a dependency on the host's filesystem isn't a great idea.
You might be better off using a file storage service such as S3, which scales on its own and isn't subject to local disk limits.
Another option is to determine if the files could be stored in the database.
I hope that helps
Let's try to narrow the problem down.
Meteor Up passes the volumes configuration parameter directly on to docker, as mentioned in the comment you included. It is therefore easier to test against docker directly, narrowing down the components involved as much as possible:
sudo docker run \
-it \
--rm \
-v "/host/path:/container/path" \
-v "/second/host/path:/second/container/path" \
busybox \
/bin/sh
Let me explain this:
sudo because Meteor UP uses sudo to start the container. See: https://github.com/zodern/meteor-up/blob/3c7120a75c12ea12fdd5688e33574c12e158fd07/src/plugins/meteor/assets/templates/start.sh#L63
docker run - we want to start a container.
-it to access the container interactively (think of it like SSH'ing into the container).
--rm to automatically clean up - remove the container - after we're done.
-v - here we pass the volumes exactly as you defined them (I took the two-directory example you provided).
busybox - an image with some useful tools.
/bin/sh - the application to start the container with.
I'd expect that you cannot access the files here either. In that case, dig deeper into why you can't make the folder accessible in Docker.
If you can (which would surprise me), start the container and try to get inside the running container with the following command:
docker exec -it my-mup-container /bin/sh
You can think of this command like SSH'ing into a running container. Now you can check whether the file really isn't there, whether the credentials inside the container are correct, etc.
Lastly, I have to agree with #mikkel that mounting a local directory isn't a good option, but you can now start looking into how to use a docker volume to mount a remote directory. He mentioned S3 by AWS; I've worked with Azure Files on Azure; there are plenty of possibilities.
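If the plain docker test works, a minimal mup.js sketch for the example file at /home/dirname/filename.csv could look like this (mounting the whole directory, and keeping the same path inside the container, which is an arbitrary choice):
volumes: {
  '/home/dirname': '/home/dirname'
},
The app can then read /home/dirname/filename.csv directly.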

Docker: using a bind mount locally with swarm

Docker newcomer here.
I have a simple image of a Django website with a volume defined for the app directory.
I can bind this volume to the actual folder where I do the development with this command:
docker container run --rm -p 8000:8000 --mount type=bind,src=$(pwd)/wordcount-project,target=/usr/src/app/wordcount-project wordcount-django
This works fairly well.
Now I've tried to push that simple example to a swarm. Note that I have set up a local registry so the image is available.
So to start my service I do:
docker service create -p 8000:8000 --mount type=bind,source=$(pwd)/wordcount-project,target=/usr/src/app/wordcount-project 127.0.0.1:5000/wordcount-django
It works after some tries, but only because it happens to run on the local node (where the folder actually exists) and not on a remote node (where there is no wordcount-project folder).
Any idea how to solve this so that the folder is accessible from all nodes, yet still accessible locally for development?
Thanks !
Using a bind mount in docker swarm is not recommended, as you can read in the docs. In particular:
Important: Bind mounts can be useful but they can also cause problems. In most cases, it is recommended that you architect your application such that mounting paths from the host is unnecessary.
However, if you still want to use a bind mount, you have two possibilities (see the sketch after this list):
Make sure the folder exists on all the nodes. The main problem here is that you'll have to update it every time on every node.
Use a shared filesystem (such as sshfs, for example) and mount it on a directory on each node. However, once you have a shared filesystem, you can just use a docker data volume and change the driver.
You can find documentation on changing the volume driver here.
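As a sketch of the second option, using the vieux/sshfs volume plugin (the remote user, host, and path are placeholders, and the plugin has to be installed on every node first):
docker plugin install vieux/sshfs
docker service create -p 8000:8000 --mount 'type=volume,source=wordcount-data,target=/usr/src/app/wordcount-project,volume-driver=vieux/sshfs,volume-opt=sshcmd=user@remote-host:/path/to/wordcount-project' 127.0.0.1:5000/wordcount-django
Each node then creates the named volume on demand and mounts the same remote directory over SSH.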

Docker: data volume container is not instantiated with `docker create` command

So, I'm trying to package my WordPress image in such a way that all files except the uploads are persisted. To do so, I have created a Dockerfile which uses the official WordPress image as its base and adds the files from an archive (containing all the WordPress files, themes, plugins, etc.), like so:
FROM wordpress
ADD archive.tar.gz /var/www/html/
Since I want the uploads to be persisted, I have created a separate data volume container, e.g. test2.com-wp-data:
docker create -v /var/www/html/wp-content/uploads --name test2.com-wp-data wordpress
Then I simply mount it via the --volumes-from flag:
docker run --name test2.com --volumes-from test2.com-wp-data -d --link test2.com-mysql:mysql myimage
However, when I inspect my newly created container, I cannot find /var/www/html/wp-content/uploads:
# docker inspect -f '{{.HostConfig.VolumesFrom}}' test2.com
[test2.com-wp-data]
# docker inspect -f '{{.Volumes}}' test2.com
map[/var/www/html:/var/lib/docker/vfs/dir/4fff1d36d5aacd0b2c73977acf8fe680bda6fd891f2c4410a90f6c2dca4aaedf]
I can see that both /var/www/html and /var/www/html/wp-content/uploads are set up as volumes in my test2.com-wp-data data container:
# docker inspect -f '{{.Config.Volumes}}' test2.com-wp-data
map[/var/www/html:map[] /var/www/html/wp-content/uploads:map[]]
I know that the wordpress image creates a /var/www/html volume by default, which I don't really mind, but does that mean that anything below that folder is ignored if mounted separately? Will I need to build my own WordPress image in order to have /var/www/html/wp-content/uploads set up as a volume in my WordPress container?
Thank you very much for your time!
EDIT: I've tested a different setup with a folder that has nothing to do with /var/www/html, and the result is the same: --volumes-from is ignored.
Docker version 1.4+ should be what you need to get this working. Older versions of docker don't seem to play nicely with data-only containers instantiated with create rather than run.
Well, after some further testing I've realised that, despite what the documentation indicates, docker create alone is not enough to get a working data volume. I've only managed to get working data volumes by instantiating them with the docker run command, as follows:
docker run --name data -v /var/www/html/wp-content/uploads mysql true
This way the container exits immediately, but if I use it to attach the data volume to another container, it works as expected.
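Applied to the example from the question (using the wordpress image for the data container, and the container/image names from the original commands), the working sequence would then be:
docker run --name test2.com-wp-data -v /var/www/html/wp-content/uploads wordpress true
docker run --name test2.com --volumes-from test2.com-wp-data -d --link test2.com-mysql:mysql myimage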
If anyone knows any specific reason behind this behaviour, I'd be glad to learn more, especially since the documentation seems to be misleading.
Thanks!
EDIT: It turns out I was using Docker 1.3.x, which hadn't implemented this feature yet, hence the documentation was misleading for me!
