Running plex as a docker container, cannot access web UI - docker

I am trying to set up my Plex server using Docker. I have followed the steps on the LinuxServer.io docker page. When I run the docker command, it says that it is running fine and I get no errors. However, when I try to access the web UI through localhost:32400/web, all I get is "Problem loading page".
I am using Docker for Windows with Linux containers.
docker command:
docker run -d --name=plex --net=host -e PUID=1000 -e PGID=1000 -e TZ=Europe/London -e VERSION=docker -v D:\plex\config:/config -v D:\media\tvseries:/data/tvshows -v D:\media\movies:/data/movies -v D:\media\transcode:/transcode --restart unless-stopped linuxserver/plex
When I use docker ps, the plex container looks like it is running.
I am new to Docker. I have looked around and cannot find out why I cannot access the UI.
Please let me know if you require additional information.
docker inspect:
"NetworkMode": "host",
"PortBindings": {
"32400/tcp": [
{
"HostIp": "",
"HostPort": "32400"
}
]
},

--net=host does not work with Docker for Windows.
Reasons:
A Linux container needs to share a Linux host's kernel.
To achieve this, when Docker for Windows runs a Linux container, it has to set up a Hyper-V virtual machine. If you open the Hyper-V Manager, you will see a MobyLinuxVM running.
So, when you use --net=host, the container just uses the network of the MobyLinuxVM, not of Windows, and localhost will not work.
Suggestion:
For your scenario, I suggest you remove --net=host and add a port mapping on the command line:
docker run -d --name=plex -p 32400:32400 -e PUID=1000 -e PGID=1000 -e TZ=Europe/London -e VERSION=docker -v D:\plex\config:/config -v D:\media\tvseries:/data/tvshows -v D:\media\movies:/data/movies -v D:\media\transcode:/transcode --restart unless-stopped linuxserver/plex
Then the magic happens: Docker for Windows maps the Windows host's port 32400 to your container using the Windows routing mechanism, and you can reach the container's service from Windows.
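To verify that the mapping has taken effect, a quick check from the Windows host (assuming the container is named plex as in the command above):
docker port plex
curl -I http://localhost:32400/web
The first command should list 32400/tcp mapped to 0.0.0.0:32400, and the curl request should return an HTTP response instead of a connection error.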

Related

Hot reload not working with webpack-dev-server and docker

Using Ubuntu Linux with docker installed. No VM.
I have built a Docker image with a Vue.js application. To enable hot reload I start the Docker container with:
docker run -it -p 8081:8080 -e "HOST=0.0.0.0" -v ${PWD}:/app/ -v /app/node_modules --name my-frontend my-frontend-image
It starts up fine and I can access it from my host browser on localhost:8081. But when I make changes to the source files and save them, they are not reflected in my browser until I press F5 (hot reload does not work).
Some details below:
package.json
"scripts": {
"dev": "webpack-dev-server --inline --progress --config build/webpack.dev.conf.js",
"start": "npm run dev",
build/webpack.dev.conf.js
devServer: {
    clientLogLevel: 'warning',
    ...
    hot: true,
    ...
    watchOptions: {
        // poll: config.dev.poll,
        // aggregateTimeout: 500, // delay before reloading
        poll: 100 // enable polling since fsevents are not supported in docker
    }
Tried to modify the watchOptions but it has no effect.
EDIT:
Based on the answer below, I have tried to pass CHOKIDAR_USEPOLLING=true as an environment variable to docker run:
docker run -it -p 8081:8080 -e "HOST=0.0.0.0" -e "CHOKIDAR_USEPOLLING=true" -v ${PWD}:/app/ -v /app/node_modules --name my-frontend my-frontend-image
But it has no effect - I am still not able to hot reload my changes. Also, the provided link says:
Update/Clarification: This problem only occurs when running your
Docker engine inside a VM. If you are on Linux for both Docker and for
coding you do not have this problem.
So I don't think the answer applies to my setup - I am running Ubuntu Linux on my machine where I have installed Docker, so there is no VM involved.
Another update - based on the comment below on changing the port mapping:
# Hot reload works!
docker run -it -p 8080:8080 -e "HOST=0.0.0.0" -v ${PWD}:/app/ -v /app/node_modules --name my-frontend my-frontend-image
# Hot reload fails!
#docker run -it -p 8081:8080 -e "HOST=0.0.0.0" -v ${PWD}:/app/ -v /app/node_modules --name my-frontend my-frontend-image
So if I map the port as 8080:8080 instead of 8081:8080, hot reload works! Notice that the application comes up in both cases when I access it in my host browser on localhost on the aforementioned ports. It's just that hot reload only works when I map the application to port 8080 on my host.
But why??
Now if I do:
PORT='8081'
docker run -it -p "${PORT}:${PORT}" -e "HOST=0.0.0.0" -e "PORT=${PORT}" -v ${PWD}:/app/ -v /app/node_modules --name my-frontend my-frontend-image
Hot reload of course works. But I'm still not sure why I cannot map the internal container port 8080 to 8081 externally on the host.
Btw, I don't see the problem at all if I use vue-cli-service serve instead - everything works out of the box.
I am not a VueJS user at all, never worked with it, but I use Docker heavily for my development workflow, and in the past I experienced a similar issue.
In my case the JavaScript that was sent to the browser was trying to connect to the container's internal port 8080, but since the port mapped on the host was 8081, the JS in the browser was not able to reach 8080 inside the Docker container, and therefore hot reload was not working.
So it seems to me that you have the same scenario as me; you need to configure hot reload in your Vue.js app to listen on the same port you want to use on the host, or just use the same port for both, as you already concluded that it works.
If watchOptions doesn't work, you can try the other option:
environment:
- CHOKIDAR_USEPOLLING=true
As per docs here:
“If watching does not work for you, try out this option. Watching does not work with NFS and machines in VirtualBox.”
Reference:
https://daten-und-bass.io/blog/enabling-hot-reloading-with-vuejs-and-vue-cli-in-docker/
This was asked a long time ago, but I had the same problem and then noticed there's a sockPort option within the devServer config object that lets you set the port used by the websocket connection to communicate with the server for live/hot-reloading purposes.
What I did was set this option via an environment variable, and it worked just fine when accessing the dev server from outside the container.
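For illustration, a minimal sketch of how that might look in build/webpack.dev.conf.js, assuming a webpack-dev-server version that supports sockPort (SOCK_PORT is an illustrative variable name, not something from the original setup):
devServer: {
    // ... existing options (hot: true, watchOptions, etc.) ...
    // point the in-browser HMR client at the externally published port,
    // e.g. 8081 when the container is started with -p 8081:8080
    sockPort: process.env.SOCK_PORT || 8080
}
The variable can then be passed in with -e SOCK_PORT=8081 on docker run so the browser's websocket client connects to the same port that is published on the host.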

Docker zookeeper image unable to connect to sheepkiller/kafka-manager image

I am using two images: sheepkiller/kafka-manager (a tool from Yahoo Inc.; the image was made by someone with a weird sense of humour, but it has good reviews) and zookeeper.
I start ZooKeeper
docker run -it --restart always -d zookeeper
then try to start Kafka Manager:
docker run -it --rm -p 9000:9000 -e ZK_HOSTS="your-zk.domain:2181" -e APPLICATION_SECRET=letmein sheepkiller/kafka-manager
The documentation says:
(if you don't define ZK_HOSTS, default value has been set to "localhost:2181")
Error:
Initiating client connection, connectString=localhost:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@7bf272d3
[info] o.a.z.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
[info] k.m.a.KafkaManagerActor - zk=localhost:2181
[info] k.m.a.KafkaManagerActor - baseZkPath=/kafka-manager
[warn] o.a.z.ClientCnxn - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
I am using Docker version 17.12.0-ce, build c97c6d6, on Windows 10. I have tried several different things but was unsuccessful. I am assuming there is an issue with the ports, the zookeeper config file, or the sheepkiller/kafka-manager Dockerfile, but I am not sure how to change these images after I have already pulled them, if that really is the case.
The following should work fine:
docker network create zookeeper-net
docker run -it --restart always -p 2181:2181 --network zookeeper-net --name zookeeper -d zookeeper
docker run -it --rm -p 9000:9000 --network zookeeper-net -e ZK_HOSTS="zookeeper:2181" -e APPLICATION_SECRET=letmein sheepkiller/kafka-manager
(The kafka-manager container has to join the same zookeeper-net network, otherwise the zookeeper hostname will not resolve.)
Update:
There is also a compose file to set everything up. I suggest you use that.
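A minimal sketch of what such a compose file could look like (illustrative only, not the image's official file; the secret is the same placeholder as above):
version: '2'
services:
  zookeeper:
    image: zookeeper
    restart: always
    ports:
      - "2181:2181"
  kafka-manager:
    image: sheepkiller/kafka-manager
    depends_on:
      - zookeeper
    ports:
      - "9000:9000"
    environment:
      ZK_HOSTS: "zookeeper:2181"
      APPLICATION_SECRET: letmein
Compose puts both services on the same default network, so the zookeeper hostname resolves automatically. Then run: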
docker-compose up -d

Link docker containers (Drupal and MariaDB)

To start, I built a Docker container from the MariaDB Docker image.
After that I loaded a database dump file into the running container.
Everything goes fine.
When I want to run/link the Drupal image:
docker run --name drupaldocker --link mariadbdocker:mariadb -p 8089:80 -d drupal
I can reach the Drupal installation page, but when I try to connect to the database I always get the same error:
host, pass or dbname is wrong.
But I'm pretty sure my credentials are right.
It seems that my Drupal container can't find the MariaDB container.
Docker links are a deprecated feature and should be avoided: https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/
I assume you have a container named mariadbdocker running.
If you gain bash access inside the drupaldocker container, you should be able to ping the mariadb alias like this:
docker run --name drupaldocker --link mariadbdocker:mariadb -p 8089:80 -it drupal /bin/bash
If the ping succeeds, then you probably still have a credentials issue.
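Since links are deprecated, a sketch of the equivalent setup using a user-defined network (container names as in the question; remove the previously created drupaldocker container first):
docker network create drupal-net
docker network connect --alias mariadb drupal-net mariadbdocker
docker run --name drupaldocker --network drupal-net -p 8089:80 -d drupal
On the Drupal installation page the database host is then mariadb (or mariadbdocker), with the credentials you used when loading the dump.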

Docker - local volume mount to the container: Connection refused

I am quite new to the world of docker and I am trying to set this up:
Running a SolarWinds WHD container and trying to mount a local volume from the host using this command:
docker run -d -p 8081:8081 --name=whdinstance -v pwd:/usr/local/webhelpdesk/bin/pgsql/var/lib/pgsql/9.2/data solarwinds/whd-embedded:latest
This starts the container and the volume is mounted, but as soon as I go to localhost:8081 to log in to the Web Help Desk portal, it asks me to select the database and then says "Connection refused".
Can someone please help? Could this be an issue with the way I am mounting the volume?
Here are examples of how to use volumes:
To use a directory volume:
docker run -itd -p 80:80 --name wordpress -v /path/in/your/host:/path/in/your/container wordpress
You put -v followed by the path of the shared directory on your host, a colon, and then the path inside your container. When you have done this you can choose your image!
So for you it should be something like:
docker run -itd -p 8081:8081 --name=whdinstance -v "$(pwd)":/usr/local/webhelpdesk/bin/pgsql/var/lib/pgsql/9.2/data solarwinds/whd-embedded:latest
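To double-check what actually got mounted, you can inspect the container's mounts (assuming the container is named whdinstance as in the question):
docker inspect -f '{{ json .Mounts }}' whdinstance
If the host side shows up as a named volume called pwd instead of your current directory, the -v argument was not expanded to an absolute path.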

Bind to docker socket on Windows

On *nix systems, it is possible to bind-mount the docker socket from the host machine into a container by doing something like this:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Is there an equivalent way to do this when running docker on a windows host?
I tried various combinations like:
docker run -v tcp://127.0.0.1:2376:/var/run/docker.sock ...
docker run -v "tcp://127.0.0.1:2376":/var/run/docker.sock ...
docker run -v localhost:2376:/var/run/docker.sock ...
none of these have worked.
For Docker for Windows, the following seems to work:
-v //var/run/docker.sock:/var/run/docker.sock
As the Docker documentation states:
If you are using Docker Machine on Mac or Windows, your Engine daemon
has only limited access to your OS X or Windows filesystem. Docker
Machine tries to auto-share your /Users (OS X) or C:\Users (Windows)
directory. So, you can mount files or directories on OS X using:
docker run -v /Users/<path>:/<container path> ...
On Windows, mount directories using:
docker run -v /c/Users/<path>:/<container path> ...
All other paths come from your virtual machine’s filesystem, so if you
want to make some other host folder available for sharing, you need to
do additional work. In the case of VirtualBox you need to make the
host folder available as a shared folder in VirtualBox. Then, you can
mount it using the Docker -v flag.
With all that being said, you can still use:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
The first /var/run/docker.sock refers to the same path in your boot2docker virtual machine.
For example, when I run my own Jenkins image using the following command on a Windows machine:
$ docker run -dP -v /var/run/docker.sock:/var/run/docker.sock alidehghanig/jenkins
I can still talk to the Docker daemon on the host machine using the typical docker commands. For example, when I run docker ps in the Jenkins container, I can see the running containers on the host machine:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
65311731f446 jen... "/bi.." 10... Up 10.. 0.0.0.0:.. jenkins
Just to top off the answers provided earlier: when using docker-compose, one must set COMPOSE_CONVERT_WINDOWS_PATHS=1 by either:
1) creating a .env file at the same location as the project's docker-compose.yml file, or
2) setting COMPOSE_CONVERT_WINDOWS_PATHS=1 in the CLI
before running the docker-compose up command.
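For the first option, the .env file needs only the single line:
COMPOSE_CONVERT_WINDOWS_PATHS=1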
source
This never worked for me on Windows 10, even with a Linux container:
-v /var/run/docker.sock:/var/run/docker.sock
But this did:
-v /usr/local/bin/docker:/usr/bin/docker
Solution taken from this issue I opened: https://github.com/docker/for-win/issues/4642
Some containers (e.g. Portainer) work fine with -v /var/run/docker.sock:/var/run/docker.sock
The jenkins container required --user root permissions on the docker run command to successfully access the Docker UNIX socket (using Docker-Desktop on Windows).
By default, a unix domain socket (or IPC socket) is created at
/var/run/docker.sock, requiring either root permission, or docker
group membership.
Source: https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-socket-option
--group-add docker had no effect using Docker-Desktop on Windows.
To bind the Docker socket to a Windows container you need to use named pipes:
-v \\.\pipe\docker_engine:\\.\pipe\docker_engine
What worked for me on Windows 10 was:
-v "\\.\pipe\docker_engine:\\.\pipe\docker_engine"
Bear in mind that I was trying to access Portainer, which I highly recommend; it's a great app. For that I used this command:
docker run -d -p 9000:9000 -v "\\.\pipe\docker_engine:\\.\pipe\docker_engine" portainer/portainer
And then just go to:
http://localhost:9000/
I never got it to work myself, but I know it works with Windows containers on Docker for Windows Server 2016 using this technique:
https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-socket-option
We actually have VSTS agents at the shop running in Windows containers that use the host's Docker like that:
# listen using the default unix socket, and on 2 specific IP addresses on this host.
$ sudo dockerd -H unix:///var/run/docker.sock -H tcp://192.168.59.106 -H tcp://10.10.10.2
# then you can execute remote docker commands (from container to host for example)
$ docker -H tcp://0.0.0.0:2375 ps
This is what actually made it work for me:
docker run -p 8080:8080 -p 50000:50000 -v D:\docker-data\jenkins:/var/jenkins_home -v /usr/local/bin/docker:/usr/bin/docker -v /var/run/docker.sock:/var/run/docker.sock -u root jenkins/jenkins:lts
It works well:
docker run -it -v //var/run/docker.sock:/var/run/docker.sock -v /usr/local/bin/docker:/usr/bin/docker ubuntu
