How can I play audio in a Docker container with Firefox?

I got Firefox working in a Docker container, but I can't hear audio when visiting YouTube. The volume is at max. Below is my Dockerfile:
FROM linuxmintd/mint20-amd64
RUN apt-get update && apt-get upgrade -y
RUN apt-get install iputils-ping net-tools xauth x11-apps firefox pulseaudio alsa-utils sox -y
RUN xauth add geekfreak/unix:0 MIT-MAGIC-COOKIE-1 9b68dc97c89fb50bc405a86c8d4b58b5
RUN useradd -m -s /bin/bash geek
CMD ["/usr/bin/firefox","www.google.com"]
After I built the image, I ran the command below.
docker run -it -e DISPLAY --net=host --name firefox --user 1000 --device=/dev/snd lm20:1.1
Firefox opens, but when I visit YouTube I can't hear any audio. Is there a fix for this, or can Docker not play audio in a container? Thanks

You did share the host's --device=/dev/snd with the container, but the user in the container probably does not have the necessary permissions to access the device directly; and even if it does, the device may already be occupied by a host process.
Instead of trying to access the hardware directly from the container, you should use the client/server architecture of PulseAudio, i.e., expose a connection to the PulseAudio server (running as a normal desktop process directly on the host) to the PulseAudio client running in the container.
The x11docker wiki has a detailed guide for this.
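A minimal sketch of that approach, built on the run command from the question: bind-mount the host's PulseAudio socket into the container and point the client at it via PULSE_SERVER. The socket path assumes the host session runs PulseAudio as UID 1000 with the default runtime directory; adjust both to your system (depending on the server's auth settings you may also need to share ~/.config/pulse/cookie).
docker run -it -e DISPLAY --net=host --user 1000 \
  -e PULSE_SERVER=unix:/run/user/1000/pulse/native \
  -v /run/user/1000/pulse/native:/run/user/1000/pulse/native \
  --name firefox lm20:1.1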

Related

linux curl can't find docker webserver

I have been Googling this for 2 days and the only workaround I have found is using --net=host instead of -p 8081:80 to connect to the web server. I have created a basic web server on a normal non-web-server RHEL 7 box. The exposed port is 80. I build and start the container and 2 web pages are copied in. In the container, "curl http://localhost/index.html" writes out "The Web Server is Running". Outside, curl fails with "curl: (7) Failed connect to localhost:80; Connection refused". All the posts say it should work, but it doesn't.
The container was created as follows:
docker run -d -v /data/docker_webpage/unit_test/data:/var/www/html/unit_test/data -w /var/www/html/unit_test/data -p 8081:80 --name=d_webserver webserver
I have done docker inspect d_webserver and it shows "Gateway": "172.17.0.1" and "IPAddress": "172.17.0.2". curl http://localhost:8081/index.html and curl http://172.17.0.2:8081/index.html both fail. Only if I use
docker run -d -v /data/docker_webpage/unit_test/data:/var/www/html/unit_test/data -w /var/www/html/unit_test/data --net=host --name=d_webserver webserver
does it work as expected. From everything I have read, -p 8081:80 should allow me to see the web page, but it just doesn't work. There is no firewall up, so that's not the problem. Does anyone know why port 8081 is not letting me connect to the web server? What step am I missing?
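For reference, the mapping Docker actually created can be double-checked on the host with docker port; with -p 8081:80 it should report something like:
$ docker port d_webserver
80/tcp -> 0.0.0.0:8081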
Also, I would like to use Chrome on my PC to open http://xxx.xxx.xxx.xxx:8081/index.html on the Linux box instead of running the browser on the Linux box. PC Chrome says the Linux IP can't be reached. A gateway box is required, so that is probably the problem. Is there some way to fix things so PC Chrome can find the Linux Docker web server via the gateway box, or must I start Chrome on the Linux box all the time? That sort of defeats the point of making the Docker web server in the first place if people have to SSH into the box and start up a browser.
We are using local repositories because of security. These are the RPMs I saved in the rpms directory for the install.
rpms $ ls
deltarpm-3.6-3.el7.x86_64.rpm
httpd-2.4.6-97.el7_9.4.x86_64.rpm
yum-utils-1.1.31-54.el7_8.noarch.rpm
Dockerfile:
# Using RHEL 7 base image and Apache Web server
# Version 1
# Pull the rhel image from the local registry
FROM rhel7_base:latest
USER root
MAINTAINER "Group"
# Copy X dependencies that are not available on the local repository to /home/
COPY rpms/*.rpm /home/
# Update image
# Install all the RPMs using yum. First add the pixie repo to grab the rest of the dependencies
# the subscription and signature checking will be turned off for install
RUN cd /home/ && \
yum-config-manager --add-repo http://xxx.xxx.xxx.xxx/repos/rhel7/yumreposd/redhat.repo && \
cat /etc/yum.repos.d/redhat.repo && \
yum update --disableplugin=subscription-manager --nogpgcheck -y && rm -rf /var/cache/yum && \
yum install --disableplugin=subscription-manager --nogpgcheck *.rpm -y && rm -rf /var/cache/yum && \
rm *.rpm
# Copy test web page directory into container at /var/www/html.
COPY unit_test/ /var/www/html/unit_test/
# Add default Web page and expose port for testing
RUN echo "The Web Server is Running" > /var/www/html/index.html
EXPOSE 80
# Start the service
CMD ["-D", "FOREGROUND"]
ENTRYPOINT ["/usr/sbin/httpd"]
I built it this way and then started it; I should have been able to connect on port 8081, but curl fails.
docker build -t="webserver" .
docker run -d -v /data/docker_webpage/unit_test/data:/var/www/html/unit_test/data -w /var/www/html/unit_test/data -p 8081:80 --name=d_webserver webserver
curl http://localhost/index.html (time out)
curl http://localhost:8081/index.html (time out)
curl http://172.17.0.2:8081/index.html (time out)
curl http://172.17.0.2/index.html (time out)

GUI application via Docker - X11 - "Unable to init server"

I'm trying to run Firefox in a Debian docker image but can't connect to the X11 server.
I'm using the method described here, but changed the base image to the latest Debian. I also changed the user creation method.
Dockerfile
FROM debian:latest
RUN apt-get update && apt-get install -y firefox-esr
RUN useradd --shell /bin/bash --create-home developer && \
usermod -aG sudo developer
USER developer
ENV HOME /home/developer
CMD /usr/bin/firefox
Building the container
docker build -t firefox .
Command to start the container
docker run -ti --rm \
-e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
firefox
ERROR
Unable to init server: Could not connect: Connection refused
Error: cannot open display: :0
Operating system
OpenSUSE Leap 15.2
Context
I'm doing the above to understand how to run a GUI app via docker. The aim is to run the latest version of FreeCAD (v19), which is currently broken on OpenSUSE.
docker run --rm \
--net=host \
--env="DISPLAY" \
--volume="$HOME/.Xauthority:/home/developer/.Xauthority:rw" \
firefox
This should work with your Dockerfile!
A couple of points:
The .Xauthority file also needs to be shared, as it holds the cookies and auth sessions for the X server. Hence it has to be mounted read/write too.
If you don't want to use --net=host, you can bind the X server's Unix socket to a TCP port and forward that to the container, as sketched below.
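A rough sketch of that TCP variant, assuming socat is available on the host and the X server runs on display :0 (host-ip is a placeholder for the host's address as seen from the container; you may also need to allow the connection with xhost or a shared .Xauthority):
# on the host: expose the X11 Unix socket on TCP port 6000
socat TCP-LISTEN:6000,reuseaddr,fork UNIX-CONNECT:/tmp/.X11-unix/X0 &
# run the container with DISPLAY pointing at the host over TCP
docker run -ti --rm -e DISPLAY=host-ip:0 firefox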

Upload speed inside Docker Container is limited to 4 Mbit/s

I am new to Docker. I have a container running for a set of students with some version-specific compilers, as part of a virtual laboratory setup.
Everything is fine with the setup except for the network. I have a 200 Mbps connection, and a speed test done on my phone on the same network shows the full speed.
I did a speed test on the host machine where the Docker container is running (Ubuntu 20.04 LTS), and it is all good.
From inside the Docker container running on that host, I did a speed test against the same server ID 9898, and with an auto-selected server too.
The upload speed inside the Docker container is limited to 4 Mbit/s somehow, and I cannot find a reason why.
I have recently seen many students experience connection drops while trying to connect to our SSH server. I believe this has something to do with the bandwidth limit.
The docker run command I am using to run this container is as follows.
$ sudo docker run -p 7766:22 --detach --name lahtp-serv-3 --hostname lahtp-server --mount source=lahtp-3-storage-hdd,target=/var/lahtp-storage main:0.1
I asked a few people, who suggested running the container with --net=host, which attaches the container to the host network instead of the Docker bridge network. I would like to know why the Docker network limits the upload bandwidth and how using the host network instead fixes the issue.
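For context, applying that suggestion to the run command above would look roughly like this; with host networking a -p mapping no longer applies and --hostname can conflict with the host network mode, so both are dropped here:
$ sudo docker run --net=host --detach --name lahtp-serv-3 --mount source=lahtp-3-storage-hdd,target=/var/lahtp-storage main:0.1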
Update #1:
I tried to spawn a new Ubuntu 18.04 container with the following command:
$ sudo docker run --net=host -it ubuntu:18.04 /bin/bash
Once inside the container, I installed the following to run the speedtest.
root@lahtp-server:/# apt-get update && apt-get upgrade -y && apt-get install build-essential openssh-server speedtest-cli
Once the installation was done, here are the results.
But adding --net=host does not change the issue; the upload speed is still 4 Mbit/s.
How to remove this bandwidth throttling?
Update #2
I spawned a new Ubuntu 14.04 Docker container using the following command:
$ sudo docker run -it ubuntu:14.04 /bin/bash
Once the container was up, I installed the following:
$ apt-get install python3-dev python3-pip
$ pip3 install speedtest-cli
I tested inside this container, and here are the results.
NO THROTTLING.
I did the same with Ubuntu 16.04 LTS: no throttling.
$ sudo docker run -it ubuntu:16.04 /bin/bash
And once inside the container:
$ apt-get install python3-dev python3-pip
$ pip3 install speedtest-cli
NO THROTTLING.

Running 2 services

I'm building an image for GitHub's Linkurious project, based on an image already on Docker Hub for the Neo4j database. The Neo4j image automatically runs the server on port 7474, and my image runs on port 8000.
When I run my image, I publish both ports (could I do this with EXPOSE?):
docker run -d --publish=7474:7474 --publish=8000:8000 linkurious
But only my server seems to run; if I hit http://[ip]:7474/ I get nothing. Is there something special I have to do to make sure they both run?
* Edit I *
Here's my Dockerfile:
FROM neo4j/neo4j:latest
RUN apt-get -y update
RUN apt-get install -y git
RUN apt-get install -y npm
RUN apt-get install -y nodejs-legacy
RUN git clone git://github.com/Linkurious/linkurious.js.git
RUN cd linkurious.js && npm install && npm run build
CMD cd linkurious.js && npm start
* Edit II *
To perhaps help explain my quandary, I've asked a different question.
EXPOSE is there to allow inter-container communication (within the same Docker daemon), with the docker run --link option.
Port mapping is there to map EXPOSEd ports to the host, to allow client-to-container communication. So you need --publish.
See also "Difference between “expose” and “publish” in docker".
See also an example with "Advanced Usecase with Docker: Connecting Containers"
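As a quick illustration of the difference (a sketch using the port from the question, not taken from the linked posts):
# in the Dockerfile: EXPOSE only documents the port, it does not publish it
EXPOSE 8000
# publishing to the host happens at run time
docker run -d --publish=8000:8000 linkurious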
Make sure, though, that the IP is the right one ($(docker-machine ip default)).
If you are using a VM (meaning you are not running Docker directly on a Linux host, but in a Linux VM under VirtualBox), make sure the mapped ports 7474 and 8000 are port-forwarded from the host to the VM:
VBoxManage controlvm boot2docker-vm natpf1 "name,tcp,,7474,,7474"
VBoxManage controlvm boot2docker-vm natpf1 "name,tcp,,8000,,8000"
In the OP's case, this is using Neo4j: see "Neo4j with Docker", based on the neo4j/neo4j image and its Dockerfile:
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["neo4j"]
It is not meant to be used for installing another service (like nodejs), where the CMD cd linkurious.js && npm start would completely override the neo4j base image CMD (meaning neo4j would never start).
It is meant to be run on its own:
# interactive with terminal
docker run -i -t --rm --name neo4j -v $HOME/neo4j-data:/data -p 8474:7474 neo4j/neo4j
# as daemon running in the background
docker run -d --name neo4j -v $HOME/neo4j-data:/data -p 8474:7474 neo4j/neo4j
And then used by another image, with a --link neo4j:neo4j directive.
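Putting it together, a rough sketch of the resulting two-container setup (the linkurious image name and port come from the question; rebasing its Dockerfile on a plain Node.js image instead of neo4j/neo4j is the assumed change):
# run neo4j on its own
docker run -d --name neo4j -v $HOME/neo4j-data:/data -p 7474:7474 neo4j/neo4j
# run the Linkurious image, linked to the neo4j container
docker run -d --publish=8000:8000 --link neo4j:neo4j linkurious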

Docker client execution

I have a very basic question regarding Docker.
I have a Docker host installed on ubuntuA.
So, to test this from the client (UbuntuB), should Docker be installed on the UbuntuB machine as well?
The more correct answer is that only the Docker client needs to be installed on UbuntuB.
On UbuntuB, install the Docker client only; it is around 17 MB:
# apt-get update && apt-get install -y curl
# curl https://get.docker.io/builds/Linux/x86_64/docker-latest -o /usr/local/bin/docker
# chmod +x /usr/local/bin/docker
In order to run docker commands, you need to talk to the daemon on ubuntuA (port 2375 is used since Docker 1.0):
$ docker -H tcp://ubuntuA:2375 images
or
$ export DOCKER_HOST=tcp://ubuntuA:2375
$ docker images
See more detail at http://docs.docker.com/articles/basics/.
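Note that this only works if the daemon on ubuntuA actually listens on TCP. A minimal (insecure, trusted-LAN-only) sketch of that daemon configuration, assuming you can edit its startup flags:
# on ubuntuA: listen on the local socket and on TCP port 2375
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
(on older releases the daemon was started as docker -d with the same -H flags)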
Yes, you have to install Docker on both the client and the server.
