Watson Studio "Spark Environment" - how to increase `spark.driver.maxResultSize`? - watson-studio

I'm running a Spark job that reads, manipulates, and merges a lot of txt files into a single file, but I'm hitting this issue:
Py4JJavaError: An error occurred while calling o8483.collectToPython.
: org.apache.spark.SparkException: Job aborted due to stage failure: Total size of serialized results of 838 tasks (1025.6 MB) is bigger than spark.driver.maxResultSize (1024.0 MB)
Is it possible to increase the size of spark.driver.maxResultSize?
Note: this question is about the WS Spark “Environments” NOT about Analytics Engine.

If you are using an "Analytics Engine" Spark cluster instance, you can increase the default value through the Ambari console. You can get the link and credentials for the Ambari console from the IAE instance in console.bluemix.net. In the Ambari console, add a new property under
Spark2 -> "Custom spark2-defaults" -> Add property -> spark.driver.maxResultSize = 2GB
Make sure the spark.driver.maxResultSize value is less than the driver memory, which is set in
Spark2 -> "Advanced spark2-env" -> content -> SPARK_DRIVER_MEMORY
If you are just trying to create a single CSV file and don't want to change Spark conf values (since you don't know how large the final file will be), another suggestion is to use a function like the one below, which uses hdfs getmerge to produce a single CSV file, much like pandas would.
import os
import tempfile

def writeSparkDFAsCSV_HDFS(spark_df, file_location, file_name, csv_sep=',', csv_quote='"'):
    """
    Write a large Spark dataframe as a single CSV file without running
    into memory issues while converting to a pandas dataframe.
    It first writes the Spark df to a temp HDFS location and uses getmerge to create
    a single file. After adding a header, the merged file is moved to HDFS.
    Args:
        spark_df (Spark dataframe) : Data object to be written to file.
        file_location (String)     : Directory location of the file.
        file_name (String)         : Name of file to write to.
        csv_sep (character)        : Field separator to use in the CSV file.
        csv_quote (character)      : Quote character to use in the CSV file.
    """
    # define temp and final paths
    file_path = os.path.join(file_location, file_name)
    temp_file_location = tempfile.NamedTemporaryFile().name
    temp_file_path = os.path.join(temp_file_location, file_name)

    print("Create directories")
    # create directories if they don't exist, both locally and in HDFS
    !mkdir $temp_file_location
    !hdfs dfs -mkdir $file_location
    !hdfs dfs -mkdir $temp_file_location

    # write to temp HDFS location
    print("Write to temp hdfs location : {}".format("hdfs://" + temp_file_path))
    spark_df.write.csv("hdfs://" + temp_file_path, sep=csv_sep, quote=csv_quote)

    # merge the HDFS part files into a single local file
    print("Merge and put file at {}".format(temp_file_path))
    !hdfs dfs -getmerge $temp_file_path $temp_file_path

    # add a header to the merged file
    header = ",".join(spark_df.columns)
    !rm $temp_file_location/.*crc
    line_prepender(temp_file_path, header)

    # move the final file to HDFS
    !hdfs dfs -put -f $temp_file_path $file_path

    # clean up temp locations
    print("Cleanup..")
    !rm -rf $temp_file_location
    !hdfs dfs -rm -r $temp_file_location
    print("Done!")

Related

Apache Beam dataflow on GCS: ReadFromText from multiple paths

I was wondering whether it is possible to use the ReadFromText PTransform and pass it multiple paths.
My PTransform's expand method is:
def expand(self, pcoll):
    dataset = (
        pcoll
        | "Read Dataset from text file" >> beam.io.ReadFromText(self._source)
    )
and self._source right now is a string holding a path with a glob pattern:
self._source = "gs://bucket1/folder/*"
From the documentation it says:
Args:
file_pattern (str): The file path to read from as a local file path or a
GCS ``gs://`` path. The path can contain glob characters
(``*``, ``?``, and ``[...]`` sets).
While it works fine when I use gs://folder/*.gz (I have multiple files under one path), I can't seem to make it work when I have different paths (or, in my case, different buckets).
I tried something similar with the ls command:
gsutil ls gs://{bucket1/folder,bucket2/folder}/*
But if I try the same pattern with the Beam pipeline it doesn't work and gives me
ERROR: (gcloud.dataflow.flex-template.run) unrecognized arguments:
Is there a way to make it work?
As you explained in your comment, you can solve it with a for loop over the paths in the Beam pipeline, for example:
bucket_paths = [
    "gs://bucket/folder/file*.txt",
    "gs://bucket2/folder/file*.txt"
]

with beam.Pipeline(options=PipelineOptions()) as p:
    for i, bucket_path in enumerate(bucket_paths):
        (p
         | f"Read Dataset from text file {i}" >> beam.io.ReadFromText(bucket_path)
         ....
        )
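If the per-path reads need to end up in a single PCollection, one option (a minimal sketch, not from the original answer) is to collect the reads in the loop and merge them with beam.Flatten:
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

bucket_paths = [
    "gs://bucket/folder/file*.txt",
    "gs://bucket2/folder/file*.txt"
]

with beam.Pipeline(options=PipelineOptions()) as p:
    # Read each path into its own PCollection, then merge them into one.
    per_path = [
        p | f"Read {i}" >> beam.io.ReadFromText(path)
        for i, path in enumerate(bucket_paths)
    ]
    merged = tuple(per_path) | "Merge" >> beam.Flatten()
    merged | "Print" >> beam.Map(print)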

How to process video with OpenCV2 in Google Cloud function?

Starting point:
There is a video called myVideo.mp4 in a folder (/1_original_videos) in a bucket called myBucket in Google Cloud Storage.
myBucket
-->/1_original_videos
    -->myVideo.mp4
Goal:
The goal is to take this video, split it into chunks in a Cloud Function myCloudFunction and save the chunks in a subfolder called chunks in myBucket. The part of dividing into chunks is not a problem. The problem is reading the video.
myCloudFunction must be triggered with an HTTP trigger.
                   _______________
myVideo.mp4 ----> |myCloudFunction| ----> chunk0.mp4, chunk1.mp4, chunk2.mp4, ..., chunkN-1.mp4
                   ‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
                          ^
                          |
                     HTTP trigger
If the video were on my local computer, in order to read it, the following would be enough:
import cv2
cap = cv2.VideoCapture("/some/path/in/my/local/computer/myVideo.mp4")
Attempts:
Path with authenticated URL:
import cv2
cap = cv2.VideoCapture("https://storage.cloud.google.com/myBucket/1_original_videos/myVideo.mp4")
When testing this approach, this is the resulting message (see complete code below):
"File Cannot be Opened"
Complete code:
import cv2

def video2chunks(request):
    # Request:
    REQUEST_JSON = request.get_json()
    # If the HTTP request contains a key called "start" (e.g. {"start": "whatever"}):
    if REQUEST_JSON and 'start' in REQUEST_JSON:
        try:
            # Create VideoCapture object:
            cap = cv2.VideoCapture("https://storage.cloud.google.com/myBucket/1_original_videos/myVideo.mp4")
            # If no VideoCapture object is created:
            if not cap.isOpened():
                message = "File Cannot be Opened"
            # If a VideoCapture object is created, compute some of the video parameters:
            else:
                fps = int(cap.get(cv2.CAP_PROP_FPS))
                size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
                fourcc = int(cv2.VideoWriter_fourcc('X', 'V', 'I', 'D'))  # XVID codec
                message = "Video downloaded successfully. Some params are: "
                message += "FPS= " + str(fps) + " | size= " + str(size)
        except Exception as e:
            message = str(e)
    else:
        message = "You did not provide a key called start "
    return message
I have been trying to find examples or a better way to do this in a Cloud Function but so far have been unsuccessful. Any alternatives would also be very much appreciated.
I'm not aware of the cv2 library supporting reads directly from Cloud Storage. Nonetheless, as Christoph points out, you can download the file, process it, and upload the results; the code is essentially the same as when running locally.
One thing to note is that Cloud Functions offer a temporary directory (/tmp), which is where I chose to store the files. It's important to know that anything stored there consumes part of the function's RAM, so the allocated function memory should be sized accordingly. You may also notice the temp files are deleted before the function exits; this is a best practice in Cloud Functions.
import cv2
import os
from google.cloud import storage

def myfunc(request):
    # Substitute the variables below for whatever suits your needs:
    # BUCKET_ID          :: the bucket ID
    # INPUT_IMAGE_GCS    :: path to the GCS object to read
    # OUTPUT_IMAGE_PATH  :: GCS path to save the resulting image/video

    # Read the video and save it to the /tmp directory
    bucket = storage.Client().bucket(BUCKET_ID)
    blob = bucket.blob(INPUT_IMAGE_GCS)
    blob.download_to_filename('/tmp/video.mp4')

    # Video processing stuff
    vidcap = cv2.VideoCapture('/tmp/video.mp4')
    success, image = vidcap.read()
    cv2.imwrite('/tmp/frame.jpg', image)

    # Save the result to GCS
    img_blob = bucket.blob(OUTPUT_IMAGE_PATH)
    img_blob.upload_from_filename('/tmp/frame.jpg')

    # Delete tmp resources to free memory
    os.remove('/tmp/video.mp4')
    os.remove('/tmp/frame.jpg')
    return '', 200
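If staging the video in /tmp is not an option (for example, for very large files), another approach to try, not from the original answer, is to generate a short-lived V4 signed URL for the object and pass that URL to cv2.VideoCapture. Whether this works depends on the OpenCV build having FFmpeg support for HTTPS input and on the function's service account being able to sign URLs, so treat it as a sketch to verify:
import datetime
import cv2
from google.cloud import storage

def open_video_via_signed_url(bucket_id, object_path):
    # bucket_id and object_path are placeholders for your own bucket and video path.
    blob = storage.Client().bucket(bucket_id).blob(object_path)
    # Generate a short-lived V4 signed URL for the GCS object.
    url = blob.generate_signed_url(
        version="v4",
        expiration=datetime.timedelta(minutes=15),
        method="GET",
    )
    # OpenCV builds with FFmpeg can often open HTTP(S) sources directly;
    # this avoids /tmp but is not guaranteed on all builds.
    return cv2.VideoCapture(url)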

Neo4j import tool fails and doesn't show why

I have created 15.4 GB of CSV files that I would like to import into a fresh new Neo4j graph.db.
After executing the neo4j-admin import --delimiter="|" --array-delimiter="&" --nodes="processes.*" command (I have 17229 CSV files, named "processes_someHash.csv"), I get this particular output:
..../pathWithCsvFiles: neo4j-admin import --delimiter="|" --array-delimiter="&" --nodes="processes.*"
WARNING: Max 1024 open files allowed, minimum of 40000 recommended. See the Neo4j manual.
For input string: "10059167292802359779483"
usage: neo4j-admin import [--mode=csv] [--database=<name>]
[--additional-config=<config-file-path>]
[--report-file=<filename>]
[--nodes[:Label1:Label2]=<"file1,file2,...">]
[--relationships[:RELATIONSHIP_TYPE]=<"file1,file2,...">]
[--id-type=<STRING|INTEGER|ACTUAL>]
[--input-encoding=<character-set>]
[--ignore-extra-columns[=<true|false>]]
[--ignore-duplicate-nodes[=<true|false>]]
[--ignore-missing-nodes[=<true|false>]]
[--multiline-fields[=<true|false>]]
[--delimiter=<delimiter-character>]
[--array-delimiter=<array-delimiter-character>]
[--quote=<quotation-character>]
[--max-memory=<max-memory-that-importer-can-use>]
[--f=<File containing all arguments to this import>]
[--high-io=<true/false>]
usage: neo4j-admin import --mode=database [--database=<name>]
[--additional-config=<config-file-path>]
[--from=<source-directory>]
environment variables:
NEO4J_CONF Path to directory which contains neo4j.conf.
NEO4J_DEBUG Set to anything to enable debug output.
NEO4J_HOME Neo4j home directory.
HEAP_SIZE Set JVM maximum heap size during command execution.
Takes a number and a unit, for example 512m.
Import a collection of CSV files with --mode=csv (default), or a database from a
pre-3.0 installation with --mode=database.
options:
--database=<name>
Name of database. [default:graph.db]
--additional-config=<config-file-path>
Configuration file to supply additional configuration in. [default:]
--mode=<database|csv>
Import a collection of CSV files or a pre-3.0 installation. [default:csv]
--from=<source-directory>
The location of the pre-3.0 database (e.g. <neo4j-root>/data/graph.db).
[default:]
--report-file=<filename>
File in which to store the report of the csv-i
... and more of a manual
What does the For input string: "10059167292802359779483" mean?
Have you checked the headers in your CSV files? That's been a problem for me when importing previously.
Any chance your delimiter character is also appearing in data values?
Well, I tested the Neo4j import with a more compact dataset and it worked fine (when there was a problem with the delimiter, for example, the error message told me what the specific problem was). I then used the program that creates these CSV files, tuned on that small dataset, to produce the larger CSV files mentioned above, and those don't import.
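"For input string: ..." is how a Java NumberFormatException reports its input, and 10059167292802359779483 is larger than the maximum signed 64-bit value (9223372036854775807). A likely cause, to verify against your headers, is a field typed as :int/:long (or IDs imported with --id-type=INTEGER) containing a value that does not fit in a Java long. Below is a small diagnostic sketch, assuming Neo4j-style type annotations in the header row of each file and the | delimiter from the command above:
import csv
import glob

LONG_MIN, LONG_MAX = -2**63, 2**63 - 1

# Hypothetical diagnostic: flag values in :int/:long columns that cannot be stored in a Java long.
for path in glob.glob("processes_*.csv"):
    with open(path, newline="") as f:
        reader = csv.reader(f, delimiter="|")
        header = next(reader)
        numeric_cols = [i for i, h in enumerate(header)
                        if h.endswith(":int") or h.endswith(":long")]
        for row_num, row in enumerate(reader, start=2):
            for i in numeric_cols:
                value = row[i].strip()
                if not value:
                    continue
                try:
                    ok = LONG_MIN <= int(value) <= LONG_MAX
                except ValueError:
                    ok = False  # not a valid integer at all
                if not ok:
                    print(f"{path}:{row_num} column {header[i]} cannot be stored as a long: {value}")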

Local file for Google Speech

I followed this page:
https://cloud.google.com/speech/docs/getting-started
and I could reach the end of it without problems.
In the example though, the file
'uri':'gs://cloud-samples-tests/speech/brooklyn.flac'
is processed.
What if I want to process a local file? In case this is not possible, how can I upload my .flac via command line?
Thanks
You're now able to process a local file by specifying a local path instead of the Google Storage one:
gcloud ml speech recognize '/Users/xxx/cloud-samples-tests/speech/brooklyn.flac' --language-code='en-US'
You can run this command with the gcloud tool (https://cloud.google.com/speech-to-text/docs/quickstart-gcloud).
Solution found:
I created my own bucket (my_bucket_test) and uploaded the file there via:
gsutil cp speech.flac gs://my_bucket_test
If you don't want to create a bucket (which costs extra time and money), you can stream the local file. The following code is copied directly from the Google Cloud docs:
def transcribe_streaming(stream_file):
    """Streams transcription of the given audio file."""
    import io
    from google.cloud import speech

    client = speech.SpeechClient()

    with io.open(stream_file, "rb") as audio_file:
        content = audio_file.read()

    # In practice, stream should be a generator yielding chunks of audio data.
    stream = [content]

    requests = (
        speech.StreamingRecognizeRequest(audio_content=chunk) for chunk in stream
    )

    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )

    streaming_config = speech.StreamingRecognitionConfig(config=config)

    # streaming_recognize returns a generator.
    responses = client.streaming_recognize(
        config=streaming_config,
        requests=requests,
    )

    for response in responses:
        # Once the transcription has settled, the first result will contain the
        # is_final result. The other results will be for subsequent portions of
        # the audio.
        for result in response.results:
            print("Finished: {}".format(result.is_final))
            print("Stability: {}".format(result.stability))
            alternatives = result.alternatives
            # The alternatives are ordered from most likely to least.
            for alternative in alternatives:
                print("Confidence: {}".format(alternative.confidence))
                print(u"Transcript: {}".format(alternative.transcript))
Here is the URL in case the package's function names change over time: https://cloud.google.com/speech-to-text/docs/streaming-recognize
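For short local files there is also a non-streaming option. Below is a minimal sketch, not taken from the answers above, assuming a FLAC file and the same google-cloud-speech client; synchronous recognition is limited to roughly one minute of audio, beyond which you need GCS and long-running recognition:
import io
from google.cloud import speech

def transcribe_local_file(path):
    # Synchronous recognition of a short local FLAC file.
    client = speech.SpeechClient()
    with io.open(path, "rb") as audio_file:
        content = audio_file.read()
    audio = speech.RecognitionAudio(content=content)
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.FLAC,
        language_code="en-US",
    )
    response = client.recognize(config=config, audio=audio)
    for result in response.results:
        print("Transcript: {}".format(result.alternatives[0].transcript))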

Loading whole file from source into HDFS in flume

How do I keep the source file name as-is when moving a file from the source into HDFS with Flume?
For example, source file /usr/sample.txt should land in HDFS as /tmp/sample.txt, not as something like flumeevents.23343.tmp.
How do I stop Flume appending the timestamp and .tmp suffix (e.g. in flumeevent.12334343.tmp, I don't want the 12334343.tmp part)?
How can I read a whole file at a time with Flume?
How can I read a CSV file in Flume?
You need to set a parameter on the spooldir source that adds a header carrying the file name; it is false by default:
agentname.sources.sourcename.fileHeader=true
This keeps the original file name available in the event headers so it can be reused when the file is pushed into HDFS (see the configuration sketch below).
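As a hedged configuration sketch (not part of the original answer), one way to carry the name through to the HDFS sink is the spooldir source's basenameHeader together with an hdfs.filePrefix that references that header; the agent, channel, and directory names below are placeholders. Note the HDFS sink still appends a counter to the file name, so the original base name is preserved as a prefix rather than matched exactly, and the .tmp in-use suffix is removed automatically when the file is closed.
# Hypothetical Flume agent configuration (names are placeholders)
agentname.sources.sourcename.type = spooldir
agentname.sources.sourcename.spoolDir = /usr/incoming
agentname.sources.sourcename.fileHeader = true
agentname.sources.sourcename.basenameHeader = true
agentname.sources.sourcename.channels = channelname

agentname.channels.channelname.type = memory

agentname.sinks.sinkname.type = hdfs
agentname.sinks.sinkname.channel = channelname
agentname.sinks.sinkname.hdfs.path = /tmp
agentname.sinks.sinkname.hdfs.filePrefix = %{basename}
agentname.sinks.sinkname.hdfs.fileType = DataStream
# Relax rolling so a source file is not split into many HDFS part files
agentname.sinks.sinkname.hdfs.rollInterval = 0
agentname.sinks.sinkname.hdfs.rollSize = 0
agentname.sinks.sinkname.hdfs.rollCount = 0
agentname.sinks.sinkname.hdfs.idleTimeout = 60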
