When I run a Dataflow job, it takes my small package (setup.py or requirements.txt) and uploads it to run on the Dataflow instances.
But what is actually running on the Dataflow instances? I got this stack trace recently:
File "/usr/lib/python2.7/httplib.py", line 1073, in _send_request
self.endheaders(body)
File "/usr/lib/python2.7/httplib.py", line 1035, in endheaders
self._send_output(message_body)
File "/usr/lib/python2.7/httplib.py", line 877, in _send_output
msg += message_body
TypeError: must be str, not unicode
[while running 'write to datastore/Convert to Mutation']
But in theory, if str += unicode is failing like this, doesn't that imply I might not be running this Python patch? Can you point me to the Docker image these jobs run on, so I can know what version of Python I'm working with and make sure I'm not barking up the wrong tree here?
The cloud console shows me the instance template, which seems to point to dataflow-dataflow-owned-resource-20170308-rc02, but it seems I don't have permission to look at it. Is the source for it online anywhere?
Haven't tested (and maybe there is an easier way), but something like this might do the trick:
ssh into one of the Dataflow workers from the console
run docker ps to get the container id
run docker inspect <container_id>
grab the image ID from the Image field
run docker history --no-trunc <image>
Then you should find what you are after.
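If all you need is the Python version, an even shorter path (equally untested, and assuming python is on the container's PATH) is to ask the running container directly once you have its ID:
docker exec <container_id> python -V
docker exec <container_id> cat /etc/os-release
The second command is a guess on my part: it should identify the base distro, provided the image ships an /etc/os-release file.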
Related
I just downloaded this Docker image to set up a Spark cluster with two worker nodes. The cluster is up and running; however, when I want to submit my Scala file to the cluster, I am not able to start spark-shell in it.
When I was using another Docker image, I was able to start it using spark-shell.
Can someone please explain whether I need to install Scala separately in the image, or whether there is a different way to start it?
UPDATE
Here is the error:
bash: spark-shell: command not found
root@a7b0682ff17d:/opt/spark# ls /home/shangupta/Scripts/
ProfileData.json demo.scala queries.scala
TestDataGeneration.sql input.scala
root@a7b0682ff17d:/opt/spark# spark-shell /home/shangupta/Scripts/input.scala
bash: spark-shell: command not found
root@a7b0682ff17d:/opt/spark#
You're getting command not found because PATH isn't correctly established in that image.
Use the absolute path /opt/spark/bin/spark-shell instead.
Also, I'd suggest packaging your Scala project as an uber JAR and submitting it with spark-submit, unless you have no external dependencies or don't mind adding --packages/--jars manually; see the sketch below.
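For example, with the script path from your transcript (untested in your image; -i preloads the file into the REPL rather than treating it as an application argument):
/opt/spark/bin/spark-shell -i /home/shangupta/Scripts/input.scala
And once the project is packaged as an uber JAR (the class and JAR names below are hypothetical placeholders):
/opt/spark/bin/spark-submit --class com.example.Main myapp-assembly.jar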
I'm currently trying to port my image optimizer application to a NanoServer docker image. One of the tools my image optimizer uses is truepng.exe. (Can be downloaded here: http://x128.ho.ua/clicks/clicks.php?uri=TruePNG_0625.zip)
I simply created a nanoserver container and mounted a folder that contained truepng.exe:
docker run --rm -it -v C:\data:C:\data mcr.microsoft.com/windows/nanoserver:2004-amd64
When I now run truepng.exe, I expect some output about missing command-line arguments:
C:\MyLocalWindowsMachine>truepng
TruePNG 0.6.2.5 : PNG Optimizer
by x128 (2010-2017)
x128@ua.fm
...
However, when I call this from inside the nanoserver Docker container, I see essentially no output:
C:\data>truepng
C:\data>echo %ERRORLEVEL%
-1073741515
As you can see above, the exit code is -1073741515 (0xC0000135, i.e. STATUS_DLL_NOT_FOUND). According to this, it typically means that a dependency is missing.
I then downloaded https://github.com/lucasg/Dependencies to see the dependencies of truepng:
It seems it depends on 5 DLLs. Looking these up, I found that there's apparently something called 'Reverse Forwarders': https://cloudblogs.microsoft.com/windowsserver/2015/11/16/moving-to-nano-server-the-new-deployment-option-in-windows-server-2016/
According to the following post, though, they should already be included in nanoserver: https://social.technet.microsoft.com/Forums/en-US/5b36a6d3-84c9-4940-8b7a-9e2a38468291/reverse-forwarders-package-in-tp5?forum=NanoServer
After all this investigation I've also been playing around with manually copying the DLLs from my local machine (System32) into the container, without any success; it just kept breaking other things (like the copy command itself), which forced me to recreate the container. Next to that, I've also copied the files from SysWOW64, but that didn't help either.
I'm currently quite stuck on how to proceed, as I'm not even sure whether the tool is missing dependencies or whether something else is going on. Is there a way to investigate which DLLs are missing once a tool is started?
Kind regards,
Devedse
Edit 1: Idea from @CherryDT
I tried running gflags (https://social.msdn.microsoft.com/Forums/en-US/f004a7e5-9024-4555-9ada-e692fbc3160d/how-to-start-quotloader-snapsquot?forum=vcgeneral) which gave the following output:
C:\data>"C:\data\gflags.exe" /i TruePNG.exe +sls
Current Registry Settings for TruePNG.exe executable are: 00000000
After this I tried running Dbgview.exe; this, however, never resulted in a log file being written:
C:\data>"C:\data\DebugView\Dbgview.exe" /v /l debugview-log.txt /g /n
C:\data>
I also started TruePNG.exe again, but again, no log file was written.
I tried querying the EventLogs using a dotnet core application, but this resulted in the following exception:
Unhandled exception. System.InvalidOperationException: Cannot open log Application on computer '.'. This function is not supported on this system.
at System.Diagnostics.EventLogInternal.OpenForRead(String currentMachineName)
at System.Diagnostics.EventLogInternal.GetEntryAtNoThrow(Int32 index)
at System.Diagnostics.EventLogEntryCollection.GetEntryAtNoThrow(Int32 index)
at System.Diagnostics.EventLogEntryCollection.EntriesEnumerator.MoveNext()
at EventLogReaderTest.ConsoleApp.Program.Main(String[] args) in C:\data\EventLogReaderTest.ConsoleApp\Program.cs:line 22
Windows Nano Server is tiny and only supports 64-bit applications, tools, and agents. The missing dependency in this case is the entire x86 emulation layer (WoW64), as TruePNG is a 32-bit application.
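(If you want to double-check the bitness of a binary yourself and have a Linux box or WSL handy, the standard file utility will report it; this is just an aside, not something you run inside the container:
file TruePNG.exe
PE32 in the output means a 32-bit executable, while PE32+ means 64-bit.)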
Windows Server Core contains WoW64 and other components missing from Nano Server. Use a Windows Server Core image instead.
Example command:
docker run --rm -it -v C:\Temp:C:\Temp mcr.microsoft.com/windows/servercore:2004 C:\Temp\TruePNG.exe
Yields the expected output:
TruePNG 0.6.2.5 : PNG Optimizer
by x128 (2010-2017)
x128@ua.fm
TruePNG {options} files
options:
/f# PNG delta filters 0=None, 1=Sub, 2=Up, 3=Average, 4=Paeth, 5=Mixed
/fe PNG extra filters, overrides /f switch
/i# PNG interlace method 0=None, 1=Adam7 (default input)
/g# PNG gamma 0=Remove, 1=Apply & Remove, 2=Keep (default)
[...]
We have a Dask pipeline in which we basically use a LocalCluster as a process pool, i.e. we start the cluster with LocalCluster(processes=True, threads_per_worker=1), like so:
from dask.distributed import Client, LocalCluster

dask_cluster = LocalCluster(processes=True, threads_per_worker=1)
with Client(dask_cluster) as dask_client:
    exit_code = run_processing(input_file, dask_client, db_state).value
Our workflow and task parallelization work great when run locally. However, when we copy the code into a Docker container (CentOS based), the processing completes, but we sometimes get the following error as the container exits:
Traceback (most recent call last):
  File "/opt/rh/rh-python36/root/usr/lib64/python3.6/multiprocessing/queues.py", line 240, in _feed
    send_bytes(obj)
  File "/opt/rh/rh-python36/root/usr/lib64/python3.6/multiprocessing/connection.py", line 200, in send_bytes
    self._send_bytes(m[offset:offset + size])
  File "/opt/rh/rh-python36/root/usr/lib64/python3.6/multiprocessing/connection.py", line 404, in _send_bytes
    self._send(header + buf)
  File "/opt/rh/rh-python36/root/usr/lib64/python3.6/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
Furthermore, we get multiple instances of this error, which makes me think it is coming from abandoned worker processes. Our current working theory is that this is somehow related to the "Docker zombie reaping problem", but we don't know how to fix it without starting from a completely different Docker image, and we don't want to do that.
Is there a way to fix this using only Dask cluster/client cleanup methods?
You should create the cluster as a context manager as well; it is the cluster, not the Client, that actually launches the processes.
with LocalCluster(processes=True, threads_per_worker=1) as dask_cluster:
    with Client(dask_cluster) as dask_client:
        exit_code = run_processing(input_file, dask_client, db_state).value
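Closing the cluster this way tears the worker processes down cleanly when the block exits, which should stop the multiprocessing feeder threads from writing into pipes whose reading end has already disappeared.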
I use the Java S2I image for a container running in OpenShift (on premises). My problem is that the output of the image is page-buffered, and oc logs ... does not show me the latest logs.
I could probably spin up my own Docker image that would run stdbuf -oL -e0 java ..., but I would prefer to stick with the 'official' image (just adding the JAR to /deployments). Is there any way to reduce the buffering (use line buffering instead of page buffering), or to flush the output on demand?
EDIT: It seems that I could update the deployment config and pass stdbuf in there, but that means I'd have to compose all the arguments myself. The ideal solution would be passing --tty to Docker, but I can't see how custom arguments could be passed that way in OpenShift.
In your repo, try creating the file .s2i/bin/run. In it add:
#!/bin/bash
exec stdbuf -oL -e0 /usr/local/s2i/run
I always forget where the S2I assemble and run scripts are in the Java S2I image, so you may need to replace /usr/local/s2i with the correct path.
Adding this file means it will be run as the startup command instead of the original run script; it then runs the original script under stdbuf. Ensure you use exec so that the subprocess replaces the current one, else signals will not be propagated through properly.
Even though this might work, I'm surprised the logging isn't already unbuffered. I'd expect there to be a better way of controlling it through some Java configuration instead.
So I pulled the snips image from docker hub.
When I run the image, it gives me the error:
standard_init_linux.go:178: exec user process caused "no such file or directory".
Most of the solutions online seem relevant only when the image has been previously built. However, in my case, I've just pulled the image. I haven't done anything with it. When I pull the image again it says:
Status: Image is up to date for snipsdocker/platform:latest
I'm kind of inexperienced, so I have no clue as to what is happening.
Could someone help?
PS: I'm using Docker on a Raspberry Pi Zero
Note: This answer is applicable if you use Windows.
Background
One of the reasons that you have this issue is that the line endings in the files were converted at some point from Unix format (LF) to Windows format (CR LF).
If such a conversion happens to a .sh file that will run inside a Docker container, Linux will not recognize the Windows end-of-line (EOL) markers. In particular, the kernel reads the shebang line with its trailing carriage return and looks for an interpreter literally named /bin/bash<CR>, which does not exist. This leads to an error like standard_init_linux.go:XXX: exec user process caused "no such file or directory"
Cause
The EOL conversion could happen because one of the following:
your local Git is configured to automatically convert line endings to Windows format (autocrlf = true) when you git pull the sources
you saved one of the files in a Windows editor, so it was saved with CR LF line endings
Solution
As a quick fix, you can open the file in Notepad++, go to Edit > EOL Conversion > Unix (LF), and save the file
Another quick fix: use the CLI tool dos2unix to convert the files from the command line (see the example after this list)
Change git configuration by turning off automatic conversion to Windows EOL format:
git config --global core.autocrlf input
It will change the setting globally, for all repositories on your machine.
You can also set it per repository.
See https://help.github.com/articles/dealing-with-line-endings/ for more details.
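For example, to fix a script in place (the filename entrypoint.sh is just a placeholder; run it on whichever .sh files your image executes):
dos2unix entrypoint.sh
And to change the Git setting for the current repository only, run the same command without the --global flag:
git config core.autocrlf input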