I had been running a Windows service from my machine (which has full read/write access to a network drive).
The command for ffmpeg was something like this:
-i \\filestore\test.avi -b 500000 -s 640x360 -ar 22050 -copyts -y -vcodec libx264 -acodec ac3 -y \\filestore\mp4\test.mp4
Running it from cmd works perfectly. Running it from a Windows service on the same machine would yield a "File not found" type error. Updating to the latest stable ffmpeg changed that to "Permission denied".
I am running the service as 'Local Account'. I was intending to run this on another server, so I need to get a grip on this!
Does a service on your machine run as a different user from you when you choose 'Local Account'?
Since ffmpeg, when run from a Windows service, needs to access files on a network server, you need to make sure that the service's user account has sufficient network permissions.
If both computers are in the same Windows domain, you can run the service under a domain account and grant that account permission to access the network share.
Alternatively, you can allow Everyone to access the network share and edit the Local Security Policy to enable "Network access: Let Everyone permissions apply to anonymous users".
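For example, with the domain-account approach, a minimal sketch looks like this (the service name "MyFfmpegService", the account "MYDOMAIN\svc-ffmpeg" and the folder path are placeholders, not taken from the question):
rem point the service at a domain account that has been granted access to the share
sc.exe config "MyFfmpegService" obj= "MYDOMAIN\svc-ffmpeg" password= "<password>"
rem on the file server, grant that account modify rights on the shared folder (NTFS side; share permissions are managed separately)
icacls "D:\filestore\mp4" /grant "MYDOMAIN\svc-ffmpeg:(OI)(CI)M"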
I've been following these instructions to get Docker installed on a brand new Windows 2019 Server.
As long as I use an administrative account, I can log in and run whatever I want:
C:\Windows\system32>docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
But if I try to run the same command from a non-administrator shell I get this error message:
C:\Users\sysUKNG>docker run helloworld
docker: error during connect: Post http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.40/containers/create: open //./pipe/docker_engine: Access is denied. In the default daemon configuration on Windows, the docker client must be run elevated to connect. This error may also indicate that the docker daemon is not running.
See 'docker run --help'.
Is Microsoft expecting Docker users to only interact with the Docker Daemon from an elevated account? I guess this kind of makes sense if you assume that the purpose of Docker is to run long-lasting servers. It's logical that you'd want only an administrator to be able to start and stop these kinds of things.
However, I'm trying to run a large number of batch processes which get triggered by a scheduler run from a non-administrative service account. I really don't want my scheduler to have to run elevated.
In Docker for Linux I can give any user access to Docker by adding them to the "docker-users" group. Does Windows have an equivalent way to allow any user this kind of access? My server has no group with a similar name, but I do have "Hyper-V Administrators", which it says gives the account "Complete and unrestricted access", and that's not exactly what I want.
Ideally I want a certain group of users to be able to start and stop a process that runs on Docker for Windows inside a Windows container.
This page suggests that the solution has something to do with opening a TCP port, but I'm using the Windows Server version of Docker. It doesn't have the same control panel that you normally get with Docker Desktop for Windows.
Another page suggests that I can only run Docker commands from an elevated shell. Like that poster, I want to run some Docker commands from Jenkins jobs.
Create a group "docker-users" and grant it access to the Docker named pipe with the script below. The ACL on the pipe does not persist, so the script needs to be run after each reboot.
# group (or user) that should be allowed to use the Docker named pipe
$account = "MY-SERVER-NAME\docker-users"
$npipe = "\\.\pipe\docker_engine"
# treat the named pipe as a filesystem object so its ACL can be edited
$dInfo = New-Object "System.IO.DirectoryInfo" -ArgumentList $npipe
$dSec = $dInfo.GetAccessControl()
$fullControl = [System.Security.AccessControl.FileSystemRights]::FullControl
$allow = [System.Security.AccessControl.AccessControlType]::Allow
# grant the group full control of the pipe and write the ACL back
$rule = New-Object "System.Security.AccessControl.FileSystemAccessRule" -ArgumentList $account,$fullControl,$allow
$dSec.AddAccessRule($rule)
$dInfo.SetAccessControl($dSec)
I think I grabbed this idea from here: https://dille.name/blog/2017/11/29/using-the-docker-named-pipe-as-a-non-admin-for-windowscontainers/
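If it helps, here is a hedged sketch of the surrounding setup: creating the group, adding the non-admin account, and registering the ACL script as a startup task so it re-runs after each reboot (the member name and the script path are examples, not from the original answer):
# one-time setup: create the group and add the non-admin account to it
New-LocalGroup -Name "docker-users"
Add-LocalGroupMember -Group "docker-users" -Member "MY-SERVER-NAME\sysUKNG"
# re-apply the pipe ACL at every boot by running the script above as a scheduled task
$action  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-ExecutionPolicy Bypass -File C:\scripts\grant-docker-pipe.ps1"
$trigger = New-ScheduledTaskTrigger -AtStartup
Register-ScheduledTask -TaskName "GrantDockerPipeAccess" -Action $action -Trigger $trigger -User "SYSTEM" -RunLevel Highest
The affected account has to log off and back on before the new group membership takes effect.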
I'm trying to set up a Docker container with Selenium that takes a recording of the browser with system audio using ffmpeg. I've got video working using Xvfb. Unfortunately, on the audio side, it seems to be more tricky.
I thought I would set up a virtual pulseaudio sink inside the container, which would allow me to record its monitor:
pacmd load-module module-null-sink sink_name=loopback
pacmd set-default-sink loopback
ffmpeg -f pulse -i loopback.monitor test.wav
This works on my host operating system, but when trying to start the pulseaudio daemon in a container, it fails with the following message:
E: [pulseaudio] module-console-kit.c: Unable to contact D-Bus system bus: org.freedesktop.DBus.Error.FileNotFound: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
This would seem to be related to a freedesktop service called dbus. I've tried installing it and starting its daemon, but I couldn't seem to get it to work properly.
I couldn't find much information on how to proceed from here. What am I missing for pulseaudio? Perhaps there's an easier way to record the system audio inside a container?
My goal is not to record it from the host operating system, but to play the audio inside the browser and record it all inside the same container.
The following solution from here helped me.
Run the following commands as root prior to starting PulseAudio:
mkdir -p /var/run/dbus
dbus-uuidgen > /var/lib/dbus/machine-id
dbus-daemon --config-file=/usr/share/dbus-1/system.conf --print-address
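For reference, a minimal entrypoint sketch that ties these steps to the recording commands from the question (the sink name and output path are just examples; it assumes dbus, pulseaudio and ffmpeg are installed in the image):
#!/bin/bash
# dbus prerequisites, per the commands above (run as root)
mkdir -p /var/run/dbus
dbus-uuidgen > /var/lib/dbus/machine-id
dbus-daemon --config-file=/usr/share/dbus-1/system.conf --print-address
# start PulseAudio; if the container runs as root you may need --system instead
pulseaudio -D --exit-idle-time=-1
# create a null sink and make it the default so the browser plays into it
pacmd load-module module-null-sink sink_name=loopback
pacmd set-default-sink loopback
# record whatever is played into the sink's monitor
ffmpeg -f pulse -i loopback.monitor /tmp/test.wav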
I am using Wireshark 2.4.6 portable (downloaded from their site) and I am trying to configure remote capture.
I am not clear on what I should use in the remote capture command line.
There is help for this, but it refers to the CLI option:
https://www.wireshark.org/docs/man-pages/sshdump.html
On the above page they say that using that sshdump CLI is the equivalent of this Unix CLI
ssh remoteuser@remotehost -p 22222 'tcpdump -U -i IFACE -w -' > FILE &
$ wireshark FILE
You just have to configure the SSH settings in that window to get Wireshark to log in and run tcpdump.
You can leave the capture command empty and it will capture on eth0. You'd only want to change it if you have specific requirements (like if you need to specify an interface name).
You might want to set the capture filter to not ((host x.x.x.x) and port 22) (replacing x.x.x.x with your own ip address) so the screen doesn't get flooded with its own SSH traffic.
The following works as a remote capture command:
/usr/bin/dumpcap -i eth0 -q -f 'not port 22' -w -
Replace eth0 with the interface to capture traffic on, and not port 22 with the remote capture filter, remembering not to capture your own SSH traffic.
This assumes you have configured dumpcap on the remote host to run without requiring sudo.
The other capture fields appear to be ignored when a remote capture command is specified.
Tested with Ubuntu 20.04 (on both ends) with wireshark 3.2.3-1.
The default remote capture command appears to be tcpdump.
I have not found any documentation which explains how the GUI dialog options for remote ssh capture are translated to the remote host command.
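On the point about running dumpcap on the remote host without sudo: on Debian/Ubuntu the usual approach (assuming it matches your setup; "remoteuser" here is just the SSH user from the example above) is to give the wireshark group capture rights and add the SSH user to it:
# on the remote host
sudo dpkg-reconfigure wireshark-common   # answer "Yes" to allow non-superusers to capture
sudo usermod -aG wireshark remoteuser    # log out and back in for the group change to apply
# or set the capabilities on dumpcap directly
sudo setcap cap_net_raw,cap_net_admin+eip /usr/bin/dumpcap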
Pertaining to sshdump, if you're having trouble finding the command via the commandline, note that it is not in the system path by default on all platforms.
For GNU/Linux (for example in my case, Ubuntu 20.04, wireshark v3.2.3) it was under /usr/lib/x86_64-linux-gnu/wireshark/extcap/.
If this is not the case on your system, it may be handy to ensure that mlocate is installed (sudo apt install mlocate) and then use locate sshdump to find its path (you may find some other interesting tools in the same location; use their man pages or --help to learn more).
As an evolution of "can you run GUI apps in a docker container", is it possible to run GUI applications via Docker without other tools like VNC or X11/XQuartz?
In VirtualBox, you could pass --type gui to launch a headed VM, and this doesn't require installing any additional software. Is anything like that possible via a Dockerfile or CLI arguments?
Docker doesn't provide a virtual video device and a place to render that video content in a window like a VM does.
It might be possible to run a container with --privileged and write to the Docker host's video devices. That would possibly require a second video card that's not in use. The software Docker runs in the container would also need to support that video device and be able to write directly to it or to a frame buffer. This limits what could run in the container to something like an X server or Wayland compositor that draws a display to a device.
You could try the following, which worked in my case.
Check the local machine's display and its authentication cookie:
[root@localhost ~]# echo $DISPLAY
[root@localhost ~]# xauth list $DISPLAY
localhost:15 MIT-MAGIC-COOKIE-1 cc2764a7313f243a95c22fe21f67d7b1
Copy the above authentication cookie, join your existing container, and add the display authentication:
[root@apollo-server ~]# docker exec -it -e DISPLAY=$DISPLAY 3a19ab367e79 bash
root@3a19ab367e79:/# xauth add 192.168.10.10:15.0 MIT-MAGIC-COOKIE-1 cc2764a7313f243a95c22fe21f67d7b1
root@3a19ab367e79:/# firefox
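For a fresh container rather than an existing one, a hedged sketch of the same idea (the image name is a placeholder; it still relies on the host's X server, just without VNC) would be:
docker run -it --rm --net=host -e DISPLAY=$DISPLAY some-gui-image bash
# inside the container, add the same cookie as above, then start the application
xauth add 192.168.10.10:15.0 MIT-MAGIC-COOKIE-1 cc2764a7313f243a95c22fe21f67d7b1
firefox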
I'm getting the same thing every time trying to run busybox either with docker on fedora 20 or running boot2docker in VirtualBox:
[me@localhost ~]$ docker run -it busybox
Unable to find image 'busybox:latest' locally
Pulling repository busybox
FATA[0105] Get https://index.docker.io/v1/repositories/library/busybox/images: read tcp 162.242.195.84:443: i/o timeout
I can open https://index.docker.io/v1/repositories/library/busybox/images in a browser, sometimes even without using a VPN tunnel, so I tried setting the proxy in the network settings to the proxy provided by Astrill when using VPN sharing, but it always times out.
I am currently in China, where there basically is no Internet due to the firewall. npm, git and wget seem to use the Astrill proxy in the terminal (when it is set in the network settings of Fedora 20), but somehow I either can't get the docker daemon to use it or something else is wrong.
It seems the answer was not so complicated, according to the following documentation (I had read it before, but thought setting the proxy in the network settings UI would take care of it).
So I added the following to /etc/systemd/system/docker.service.d/http-proxy.conf (after creating the docker.service.d directory and the conf file):
[Service]
Environment="HTTP_PROXY=http://localhost:3213/"
Environment="HTTPS_PROXY=http://localhost:3213/"
In the Astrill app (I'm sure other providers' applications offer something similar) there is an option for VPN sharing which will create a proxy; it can be found under Settings => VPN Sharing.
For git, npm and wget, setting the proxy in the UI (gnome-control-center => Network => Network Proxy) is enough, but when using sudo it's better to do a sudo su, set the environment variables, and then run the command that needs the proxy, for example:
sudo su
export http_proxy=http://localhost:3213/
export ftp_proxy=http://localhost:3213/
export all_proxy=socks://localhost:3213/
export https_proxy=http://localhost:3213/
export no_proxy=localhost,127.0.0.0/8,::1
export NO_PROXY="/var/run/docker.sock"
npm install -g ...
I'd like to update the solution for people who still encounter this issue today.
I don't know the details, but when using the WireGuard protocol on Astrill, docker build and docker run will use the VPN. If for some reason it doesn't work, try restarting the Docker service with sudo service docker restart while the VPN is active.
Hope it helps; I just wasted an hour trying to figure out why it had stopped working.