I am trying to build Electron (master) using the appended script on Ubuntu 22.04. It's throwing the following error (e build doesn't report this error). I am using the latest depot_tools, gn, and Node.js. Please help:
root@acs-x86-node1-ghatwala-rhel:/electron/src# gn gen out/Release --args="import(\"//electron/build/args/release.gn\")"
ERROR at //electron/BUILD.gn:110:20: Script returned non-zero exit code.
electron_version = exec_script("script/print-version.py",
^----------
Current dir: /electron/src/out/Release/
Command: python3 /electron/src/electron/script/print-version.py
Returned 1 and printed out: 0a>
/electron/src/electron/script/lib/get-version.js:19
 throw new Error('Failed to get current electron version');
 ^

Error: Failed to get current electron version
 at module.exports.getElectronVersion (/electron/src/electron/script/lib/get-version.js:19:11)
 at [eval]:1:37
 at Script.runInThisContext (node:vm:129:12)
 at Object.runInThisContext (node:vm:307:38)
 at node:internal/process/execution:83:21
 at [eval]-wrapper:6:24
 at runScript (node:internal/process/execution:82:62)
 at evalScript (node:internal/process/execution:104:10)
 at node:internal/main/eval_string:50:3

Node.js v19.3.0
File "/usr/lib/python3.8/subprocess.py", line 516, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['node', '-p', 'require("./script/lib/get-version").getElectronVersion()']' returned non-zero exit status 1.
See //electron/build/args/all.gn:2:21: which caused the file to be included.
root_extra_deps = [ "//electron" ]
^-----------
mkdir electron && cd electron
gclient config --name "src/electron" --unmanaged https://github.com/electron/electron
gclient sync --with_branch_heads --with_tags --no-history
cd src
export CHROMIUM_BUILDTOOLS_PATH=`pwd`/buildtools
gn gen out/Release --args="import(\"//electron/build/args/release.gn\")"
ninja -C out/Release electron
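For reference, the failing step can be reproduced outside of gn by running the same command that exec_script/print-version.py invokes (command and paths taken verbatim from the error above); a minimal sketch in Python:

# Reproduce the failing version lookup directly (command and cwd copied from the error output).
import subprocess

subprocess.run(
    ["node", "-p", 'require("./script/lib/get-version").getElectronVersion()'],
    cwd="/electron/src/electron",  # get-version.js is resolved relative to the electron checkout
    check=True,  # raises CalledProcessError on the same non-zero exit that gn reports
)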
I am following the instructions here: https://drake.mit.edu/from_source.html. I already ran
./setup/mac/install_prereqs.sh
in my python virtualenv (drake-venv) and it succeeded. I then managed to build and run the inclined plane example with Bazel. But trying to build some of the other examples results in errors involving YAML like this:
(drake-venv) benq:acrobot % bazel build acrobot_input --subcommands --verbose_failures --sandbox_debug
INFO: Analyzed target //examples/acrobot:acrobot_input (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
SUBCOMMAND: # //examples/acrobot:acrobot_input_codegen [action 'Action examples/acrobot/gen/acrobot_input.cc', configuration: f8bba554e4e3784a5a24e83c682b75e9b6104059526c94f74d854527a53436a6, execution platform: @local_config_platform//:host]
(cd /private/var/tmp/_bazel_benq/a35a7fa5c4830c980dbc52ab349cb0bc/execroot/drake && \
exec env - \
bazel-out/host/bin/tools/vector_gen/lcm_vector_gen '--src=examples/acrobot/acrobot_input_named_vector.yaml' '--out=bazel-out/darwin-opt/bin/examples/acrobot/gen/acrobot_input.cc' '--out=bazel-out/darwin-opt/bin/examples/acrobot/gen/acrobot_input.h' '--include_prefix=drake')
# Configuration: f8bba554e4e3784a5a24e83c682b75e9b6104059526c94f74d854527a53436a6
# Execution platform: @local_config_platform//:host
ERROR: /Users/benq/Documents/drake/examples/acrobot/BUILD.bazel:30:28: Action examples/acrobot/gen/acrobot_input.cc failed: (Exit 1): sandbox-exec failed: error executing command
(cd /private/var/tmp/_bazel_benq/a35a7fa5c4830c980dbc52ab349cb0bc/sandbox/darwin-sandbox/176/execroot/drake && \
exec env - \
TMPDIR=/var/folders/s0/tfqtn2s54135x0qzt5kxnzs00000gn/T/ \
/usr/bin/sandbox-exec -f /private/var/tmp/_bazel_benq/a35a7fa5c4830c980dbc52ab349cb0bc/sandbox/darwin-sandbox/176/sandbox.sb /var/tmp/_bazel_benq/install/ebbb2540c6000feeb8873385c487a79c/process-wrapper '--timeout=0' '--kill_delay=15' bazel-out/host/bin/tools/vector_gen/lcm_vector_gen '--src=examples/acrobot/acrobot_input_named_vector.yaml' '--out=bazel-out/darwin-opt/bin/examples/acrobot/gen/acrobot_input.cc' '--out=bazel-out/darwin-opt/bin/examples/acrobot/gen/acrobot_input.h' '--include_prefix=drake')
Traceback (most recent call last):
File "/private/var/tmp/_bazel_benq/a35a7fa5c4830c980dbc52ab349cb0bc/sandbox/darwin-sandbox/176/execroot/drake/bazel-out/host/bin/tools/vector_gen/lcm_vector_gen.runfiles/drake/tools/vector_gen/lcm_vector_gen.py", line 10, in <module>
import yaml
ModuleNotFoundError: No module named 'yaml'
Target //examples/acrobot:acrobot_input failed to build
INFO: Elapsed time: 1.059s, Critical Path: 0.58s
INFO: 5 processes: 5 internal.
FAILED: Build did NOT complete successfully
But I'm not sure why this is happening considering that importing yaml in Terminal works:
(drake-venv) benq:acrobot % which python
/Users/benq/Documents/drake/drake-venv/bin/python
(drake-venv) benq:acrobot % python --version
Python 3.9.10
(drake-venv) benq:acrobot % python -c 'import yaml'
(drake-venv) benq:acrobot %
I've already tried reinstalling PyYAML, but that didn't help.
Relevant Info:
Operating System: macOS Monterey (12.3)
Architecture: x86_64
Python: Python 3.9.10
Bazel version:
% which bazel; bazel version
/usr/local/bin/bazel
Build label: 5.0.0-homebrew
Build target: bazel-out/darwin-opt/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
Build time: Tue Jan 1 00:00:00 1980 (315532800)
Build timestamp: 315532800
Build timestamp as int: 315532800
Bazel C++ compiler: Apple clang version 13.1.6 (clang-1316.0.21.2)
Git revision: 06dd087b40
The lcm_vector_gen in the error message is a code-generation tool that's run as part of the build.
It's probably not obeying your which python, but instead using the hard-coded /usr/local/bin/python3.9 from https://github.com/RobotLocomotion/drake/blob/master/tools/py_toolchain/interpreter_paths.bzl.
We don't run or test our builds within a virtual environment, so you've stumbled into a novel situation.
Possibly editing the .bzl file linked above (interpreter_paths.bzl) to point MACOS_I386_INTERPRETER_PATH at your venv python (/Users/benq/Documents/drake/drake-venv/bin/python) would fix the error.
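If you try that, the change might look roughly like the sketch below; this is only an illustration, since the exact layout of interpreter_paths.bzl may differ, and the path is just the venv interpreter reported by which python above.

# tools/py_toolchain/interpreter_paths.bzl (hypothetical local edit)
MACOS_I386_INTERPRETER_PATH = "/Users/benq/Documents/drake/drake-venv/bin/python"

After editing, re-running the failing bazel build target should show whether the code-generation tool now picks up the venv interpreter (and therefore its PyYAML).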
I've been trying to deploy a pipeline on Google Cloud Dataflow. It's been quite a challenge so far.
I'm facing an import issue: I realised that ParDo functions require the requirements.txt to be present, otherwise the workers say they can't find the required module. https://beam.apache.org/documentation/sdks/python-pipeline-dependencies/
So I tried fixing the problem by passing in the requirements.txt file, only to be met with a very incomprehensible error message.
import apache_beam as beam
from apache_beam.runners.interactive.interactive_runner import InteractiveRunner
from apache_beam.io.gcp.bigtableio import WriteToBigTable
from apache_beam.runners import DataflowRunner
import apache_beam.runners.interactive.interactive_beam as ib
from apache_beam.options import pipeline_options
from apache_beam.options.pipeline_options import GoogleCloudOptions
import google.auth
from google.cloud.bigtable.row import DirectRow
import datetime
# Setting up the Apache Beam pipeline options.
options = pipeline_options.PipelineOptions(flags=[])
# Sets the project to the default project in your current Google Cloud environment.
_, options.view_as(GoogleCloudOptions).project = google.auth.default()
# Sets the Google Cloud Region in which Cloud Dataflow runs.
options.view_as(GoogleCloudOptions).region = 'us-central1'
# IMPORTANT! Adjust the following to choose a Cloud Storage location.
dataflow_gcs_location = 'gs://tunnel-insight-2-0-dev-291100/dataflow'
# Dataflow Staging Location. This location is used to stage the Dataflow Pipeline and SDK binary.
options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location
# Sets the pipeline mode to streaming, so we can stream the data from PubSub.
options.view_as(pipeline_options.StandardOptions).streaming = True
# Sets the requirements.txt file
options.view_as(pipeline_options.SetupOptions).requirements_file = "requirements.txt"
# Dataflow Temp Location. This location is used to store temporary files or intermediate results before finally outputting to the sink.
options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location
# The directory to store the output files of the job.
output_gcs_location = '%s/output' % dataflow_gcs_location
ib.options.recording_duration = '1m'
...
...
pipeline_result = DataflowRunner().run_pipeline(p, options=options)
I've tried to pass the requirements using options.view_as(pipeline_options.SetupOptions).requirements_file = "requirements.txt", and I get this error:
---------------------------------------------------------------------------
CalledProcessError Traceback (most recent call last)
~/apache-beam-custom/packages/beam/sdks/python/apache_beam/utils/processes.py in check_output(*args, **kwargs)
90 try:
---> 91 out = subprocess.check_output(*args, **kwargs)
92 except OSError:
/opt/conda/lib/python3.7/subprocess.py in check_output(timeout, *popenargs, **kwargs)
410 return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
--> 411 **kwargs).stdout
412
/opt/conda/lib/python3.7/subprocess.py in run(input, capture_output, timeout, check, *popenargs, **kwargs)
511 raise CalledProcessError(retcode, process.args,
--> 512 output=stdout, stderr=stderr)
513 return CompletedProcess(process.args, retcode, stdout, stderr)
CalledProcessError: Command '['/root/apache-beam-custom/bin/python', '-m', 'pip', 'download', '--dest', '/tmp/dataflow-requirements-cache', '-r', 'requirements.txt', '--exists-action', 'i', '--no-binary', ':all:']' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
RuntimeError Traceback (most recent call last)
<ipython-input-12-f018e5c84d08> in <module>
----> 1 pipeline_result = DataflowRunner().run_pipeline(p, options=options)
~/apache-beam-custom/packages/beam/sdks/python/apache_beam/runners/dataflow/dataflow_runner.py in run_pipeline(self, pipeline, options)
491 environments.DockerEnvironment.from_container_image(
492 apiclient.get_container_image_from_options(options),
--> 493 artifacts=environments.python_sdk_dependencies(options)))
494
495 # This has to be performed before pipeline proto is constructed to make sure
~/apache-beam-custom/packages/beam/sdks/python/apache_beam/transforms/environments.py in python_sdk_dependencies(options, tmp_dir)
624 options,
625 tmp_dir,
--> 626 skip_prestaged_dependencies=skip_prestaged_dependencies))
~/apache-beam-custom/packages/beam/sdks/python/apache_beam/runners/portability/stager.py in create_job_resources(options, temp_dir, build_setup_args, populate_requirements_cache, skip_prestaged_dependencies)
178 populate_requirements_cache if populate_requirements_cache else
179 Stager._populate_requirements_cache)(
--> 180 setup_options.requirements_file, requirements_cache_path)
181 for pkg in glob.glob(os.path.join(requirements_cache_path, '*')):
182 resources.append((pkg, os.path.basename(pkg)))
~/apache-beam-custom/packages/beam/sdks/python/apache_beam/utils/retry.py in wrapper(*args, **kwargs)
234 while True:
235 try:
--> 236 return fun(*args, **kwargs)
237 except Exception as exn: # pylint: disable=broad-except
238 if not retry_filter(exn):
~/apache-beam-custom/packages/beam/sdks/python/apache_beam/runners/portability/stager.py in _populate_requirements_cache(requirements_file, cache_dir)
569 ]
570 _LOGGER.info('Executing command: %s', cmd_args)
--> 571 processes.check_output(cmd_args, stderr=processes.STDOUT)
572
573 @staticmethod
~/apache-beam-custom/packages/beam/sdks/python/apache_beam/utils/processes.py in check_output(*args, **kwargs)
97 "Full traceback: {} \n Pip install failed for package: {} \
98 \n Output from execution of subprocess: {}" \
---> 99 .format(traceback.format_exc(), args[0][6], error.output))
100 else:
101 raise RuntimeError("Full trace: {}, \
RuntimeError: Full traceback: Traceback (most recent call last):
File "/root/apache-beam-custom/packages/beam/sdks/python/apache_beam/utils/processes.py", line 91, in check_output
out = subprocess.check_output(*args, **kwargs)
File "/opt/conda/lib/python3.7/subprocess.py", line 411, in check_output
**kwargs).stdout
File "/opt/conda/lib/python3.7/subprocess.py", line 512, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['/root/apache-beam-custom/bin/python', '-m', 'pip', 'download', '--dest', '/tmp/dataflow-requirements-cache', '-r', 'requirements.txt', '--exists-action', 'i', '--no-binary', ':all:']' returned non-zero exit status 1.
Pip install failed for package: -r
Output from execution of subprocess: b'Obtaining file:///root/apache-beam-custom/packages/beam/sdks/python (from -r requirements.txt (line 3))\n Saved /tmp/dataflow-requirements-cache/apache-beam-2.25.0.zip\nCollecting absl-py==0.11.0\n Downloading absl-py-0.11.0.tar.gz (110 kB)\n Saved /tmp/dataflow-requirements-cache/absl-py-0.11.0.tar.gz\nCollecting argon2-cffi==20.1.0\n Downloading argon2-cffi-20.1.0.tar.gz (1.8 MB)\n Installing build dependencies: started\n Installing build dependencies: finished with status \'error\'\n ERROR: Command errored out with exit status 1:\n command: /root/apache-beam-custom/bin/python /root/apache-beam-custom/lib/python3.7/site-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-3iuiaex9/overlay --no-warn-script-location --no-binary :all: --only-binary :none: -i https://pypi.org/simple -- \'setuptools>=40.6.0\' wheel \'cffi>=1.0\'\n cwd: None\n Complete output (85 lines):\n Collecting setuptools>=40.6.0\n Downloading setuptools-51.1.1.tar.gz (2.1 MB)\n Collecting wheel\n Downloading wheel-0.36.2.tar.gz (65 kB)\n Collecting cffi>=1.0\n Downloading cffi-1.14.4.tar.gz (471 kB)\n Collecting pycparser\n Downloading pycparser-2.20.tar.gz (161 kB)\n Skipping wheel build for setuptools, due to binaries being disabled for it.\n Skipping wheel build for wheel, due to binaries being disabled for it.\n Skipping wheel build for cffi, due to binaries being disabled for it.\n Skipping wheel build for pycparser, due to binaries being disabled for it.\n Installing collected packages: setuptools, wheel, pycparser, cffi\n Running setup.py install for setuptools: started\n Running setup.py install for setuptools: finished with status \'done\'\n Running setup.py install for wheel: started\n Running setup.py install for wheel: finished with status \'done\'\n Running setup.py install for pycparser: started\n Running setup.py install for pycparser: finished with status \'done\'\n Running setup.py install for cffi: started\n Running setup.py install for cffi: finished with status \'error\'\n ERROR: Command errored out with exit status 1:\n command: /root/apache-beam-custom/bin/python -u -c \'import sys, setuptools, tokenize; sys.argv[0] = \'"\'"\'/tmp/pip-install-6zs5jguv/cffi/setup.py\'"\'"\'; __file__=\'"\'"\'/tmp/pip-install-6zs5jguv/cffi/setup.py\'"\'"\';f=getattr(tokenize, \'"\'"\'open\'"\'"\', open)(__file__);code=f.read().replace(\'"\'"\'\\r\\n\'"\'"\', \'"\'"\'\\n\'"\'"\');f.close();exec(compile(code, __file__, \'"\'"\'exec\'"\'"\'))\' install --record /tmp/pip-record-z8o69lka/install-record.txt --single-version-externally-managed --prefix /tmp/pip-build-env-3iuiaex9/overlay --compile --install-headers /root/apache-beam-custom/include/site/python3.7/cffi\n cwd: /tmp/pip-install-6zs5jguv/cffi/\n Complete output (56 lines):\n Package libffi was not found in the pkg-config search path.\n Perhaps you should add the directory containing `libffi.pc\'\n to the PKG_CONFIG_PATH environment variable\n No package \'libffi\' found\n Package libffi was not found in the pkg-config search path.\n Perhaps you should add the directory containing `libffi.pc\'\n to the PKG_CONFIG_PATH environment variable\n No package \'libffi\' found\n Package libffi was not found in the pkg-config search path.\n Perhaps you should add the directory containing `libffi.pc\'\n to the PKG_CONFIG_PATH environment variable\n No package \'libffi\' found\n Package libffi was not found in the pkg-config search path.\n Perhaps you should add the directory containing `libffi.pc\'\n to the 
PKG_CONFIG_PATH environment variable\n No package \'libffi\' found\n Package libffi was not found in the pkg-config search path.\n Perhaps you should add the directory containing `libffi.pc\'\n to the PKG_CONFIG_PATH environment variable\n No package \'libffi\' found\n running install\n running build\n running build_py\n creating build\n creating build/lib.linux-x86_64-3.7\n creating build/lib.linux-x86_64-3.7/cffi\n copying cffi/setuptools_ext.py -> build/lib.linux-x86_64-3.7/cffi\n copying cffi/pkgconfig.py -> build/lib.linux-x86_64-3.7/cffi\n copying cffi/verifier.py -> build/lib.linux-x86_64-3.7/cffi\n copying cffi/vengine_gen.py -> build/lib.linux-x86_64-3.7/cffi\n copying cffi/backend_ctypes.py -> build/lib.linux-x86_64-3.7/cffi\n copying cffi/__init__.py -> build/lib.linux-x86_64-3.7/cffi\n copying cffi/cffi_opcode.py -> build/lib.linux-x86_64-3.7/cffi\n copying cffi/error.py -> build/lib.linux-x86_64-3.7/cffi\n copying cffi/api.py -> build/lib.linux-x86_64-3.7/cffi\n copying cffi/commontypes.py -> build/lib.linux-x86_64-3.7/cffi\n copying cffi/ffiplatform.py -> build/lib.linux-x86_64-3.7/cffi\n copying cffi/lock.py -> build/lib.linux-x86_64-3.7/cffi\n copying cffi/cparser.py -> build/lib.linux-x86_64-3.7/cffi\n copying cffi/recompiler.py -> build/lib.linux-x86_64-3.7/cffi\n copying cffi/vengine_cpy.py -> build/lib.linux-x86_64-3.7/cffi\n copying cffi/model.py -> build/lib.linux-x86_64-3.7/cffi\n copying cffi/_cffi_include.h -> build/lib.linux-x86_64-3.7/cffi\n copying cffi/parse_c_type.h -> build/lib.linux-x86_64-3.7/cffi\n copying cffi/_embedding.h -> build/lib.linux-x86_64-3.7/cffi\n copying cffi/_cffi_errors.h -> build/lib.linux-x86_64-3.7/cffi\n running build_ext\n building \'_cffi_backend\' extension\n creating build/temp.linux-x86_64-3.7\n creating build/temp.linux-x86_64-3.7/c\n gcc -pthread -B /opt/conda/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DUSE__THREAD -DHAVE_SYNC_SYNCHRONIZE -I/usr/include/ffi -I/usr/include/libffi -I/root/apache-beam-custom/include -I/opt/conda/include/python3.7m -c c/_cffi_backend.c -o build/temp.linux-x86_64-3.7/c/_cffi_backend.o\n c/_cffi_backend.c:15:10: fatal error: ffi.h: No such file or directory\n #include <ffi.h>\n ^~~~~~~\n compilation terminated.\n error: command \'gcc\' failed with exit status 1\n ----------------------------------------\n ERROR: Command errored out with exit status 1: /root/apache-beam-custom/bin/python -u -c \'import sys, setuptools, tokenize; sys.argv[0] = \'"\'"\'/tmp/pip-install-6zs5jguv/cffi/setup.py\'"\'"\'; __file__=\'"\'"\'/tmp/pip-install-6zs5jguv/cffi/setup.py\'"\'"\';f=getattr(tokenize, \'"\'"\'open\'"\'"\', open)(__file__);code=f.read().replace(\'"\'"\'\\r\\n\'"\'"\', \'"\'"\'\\n\'"\'"\');f.close();exec(compile(code, __file__, \'"\'"\'exec\'"\'"\'))\' install --record /tmp/pip-record-z8o69lka/install-record.txt --single-version-externally-managed --prefix /tmp/pip-build-env-3iuiaex9/overlay --compile --install-headers /root/apache-beam-custom/include/site/python3.7/cffi Check the logs for full command output.\n WARNING: You are using pip version 20.1.1; however, version 20.3.3 is available.\n You should consider upgrading via the \'/root/apache-beam-custom/bin/python -m pip install --upgrade pip\' command.\n ----------------------------------------\nERROR: Command errored out with exit status 1: /root/apache-beam-custom/bin/python /root/apache-beam-custom/lib/python3.7/site-packages/pip install --ignore-installed --no-user --prefix 
/tmp/pip-build-env-3iuiaex9/overlay --no-warn-script-location --no-binary :all: --only-binary :none: -i https://pypi.org/simple -- \'setuptools>=40.6.0\' wheel \'cffi>=1.0\' Check the logs for full command output.\nWARNING: You are using pip version 20.1.1; however, version 20.3.3 is available.\nYou should consider upgrading via the \'/root/apache-beam-custom/bin/python -m pip install --upgrade pip\' command.\n'
Did I do something wrong?
-------------- EDIT---------------------------------------
Ok, I've got my pipeline to work, but I'm still having a problem with my requirements.txt file which I believe I'm passing in correctly.
My pipeline code:
import apache_beam as beam
from apache_beam.runners.interactive.interactive_runner import InteractiveRunner
from apache_beam.io.gcp.bigtableio import WriteToBigTable
from apache_beam.runners import DataflowRunner
import apache_beam.runners.interactive.interactive_beam as ib
from apache_beam.options import pipeline_options
from apache_beam.options.pipeline_options import GoogleCloudOptions
import google.auth
from google.cloud.bigtable.row import DirectRow
import datetime
# Setting up the Apache Beam pipeline options.
options = pipeline_options.PipelineOptions(flags=[])
# Sets the project to the default project in your current Google Cloud environment.
_, options.view_as(GoogleCloudOptions).project = google.auth.default()
# Sets the Google Cloud Region in which Cloud Dataflow runs.
options.view_as(GoogleCloudOptions).region = 'us-central1'
# IMPORTANT! Adjust the following to choose a Cloud Storage location.
dataflow_gcs_location = ''
# Dataflow Staging Location. This location is used to stage the Dataflow Pipeline and SDK binary.
options.view_as(GoogleCloudOptions).staging_location = '%s/staging' % dataflow_gcs_location
# Sets the pipeline mode to streaming, so we can stream the data from PubSub.
options.view_as(pipeline_options.StandardOptions).streaming = True
# Sets the requirements.txt file
options.view_as(pipeline_options.SetupOptions).requirements_file = "requirements.txt"
# Dataflow Temp Location. This location is used to store temporary files or intermediate results before finally outputting to the sink.
options.view_as(GoogleCloudOptions).temp_location = '%s/temp' % dataflow_gcs_location
# The directory to store the output files of the job.
output_gcs_location = '%s/output' % dataflow_gcs_location
ib.options.recording_duration = '1m'
# The Google Cloud PubSub topic for this example.
topic = ""
subscription = ""
output_topic = ""
# Info
project_id = ""
bigtable_instance = ""
bigtable_table_id = ""
class CreateRowFn(beam.DoFn):
    def process(self, words):
        from google.cloud.bigtable.row import DirectRow
        import datetime
        direct_row = DirectRow(row_key="phone#4c410523#20190501")
        direct_row.set_cell(
            "stats_summary",
            b"os_build",
            b"android",
            datetime.datetime.now())
        return [direct_row]
p = beam.Pipeline(InteractiveRunner(),options=options)
words = p | "read" >> beam.io.ReadFromPubSub(subscription=subscription)
windowed_words = (words | "window" >> beam.WindowInto(beam.window.FixedWindows(10)))
# Writing to BigTable
test = words | beam.ParDo(CreateRowFn()) | WriteToBigTable(
    project_id=project_id,
    instance_id=bigtable_instance,
    table_id=bigtable_table_id)
pipeline_result = DataflowRunner().run_pipeline(p, options=options)
As you can see in "CreateRowFn", I need to import
from google.cloud.bigtable.row import DirectRow
import datetime
inside the function; only then does it work.
I've passed in requirements.txt as options.view_as(pipeline_options.SetupOptions).requirements_file = "requirements.txt", and I can see it in the Dataflow console.
If I remove the import statements, I get "in process NameError: name 'DirectRow' is not defined".
Is there any way to overcome this?
I've found the answer in the FAQs. My mistake was not about how to pass in requirements.txt but about how to handle NameErrors:
https://cloud.google.com/dataflow/docs/resources/faq
How do I handle NameErrors?
If you're getting a NameError when you execute your pipeline using the Dataflow service but not when you execute locally (i.e. using the DirectRunner), your DoFns may be using values in the global namespace that are not available on the Dataflow worker.
By default, global imports, functions, and variables defined in the main session are not saved during the serialization of a Dataflow job. If, for example, your DoFns are defined in the main file and reference imports and functions in the global namespace, you can set the --save_main_session pipeline option to True. This will cause the state of the global namespace to be pickled and loaded on the Dataflow worker.
Notice that if you have objects in your global namespace that cannot be pickled, you will get a pickling error. If the error is regarding a module that should be available in the Python distribution, you can solve this by importing the module locally, where it is used.
For example, instead of:
import re
…
def myfunc():
    # use re module
use:
def myfunc():
    import re
    # use re module
Alternatively, if your DoFns span multiple files, you should use a different approach to packaging your workflow and managing dependencies.
So the conclusion is:
It is OK to use import statements within the functions.
Google Dataflow workers already have these packages installed: https://cloud.google.com/dataflow/docs/concepts/sdk-worker-dependencies.
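For reference, the --save_main_session route from the FAQ can also be set programmatically; a minimal sketch, assuming the same options object used in the pipeline code above:

from apache_beam.options.pipeline_options import SetupOptions

# Pickle the main session so module-level imports and globals are shipped to the workers.
options.view_as(SetupOptions).save_main_session = True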
If you are running it from Cloud Composer, you need to add the new packages to the environment's PyPI packages.
You can also pass --requirements_file path/to/requirements.txt as a flag in the command while running it.
I prefer to use the --setup_file path/to/setup.py flag instead. The format of the setup file is as follows:
import setuptools
REQUIRED_PACKAGES = [
    'joblib==0.15.1',
    'numpy==1.18.5',
    'google',
    'google-cloud',
    'google-cloud-storage',
    'cassandra-driver==3.22.0'
]
PACKAGE_NAME = 'my_package'
PACKAGE_VERSION = '0.0.1'
setuptools.setup(
    name=PACKAGE_NAME,
    version=PACKAGE_VERSION,
    description='Search Rank project',
    install_requires=REQUIRED_PACKAGES,
    author="Mohd Faisal",
    packages=setuptools.find_packages()
)
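If you prefer setting this in code rather than on the command line, the equivalent would be something like the sketch below (assuming the setup.py above sits next to the pipeline script):

from apache_beam.options.pipeline_options import PipelineOptions, SetupOptions

options = PipelineOptions()
# Programmatic equivalent of the --setup_file flag: stage the local package defined in setup.py.
options.view_as(SetupOptions).setup_file = './setup.py'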
Use the format below for the Dataflow script:
from __future__ import absolute_import
import argparse
import logging
import apache_beam as beam
from apache_beam.options.pipeline_options import (GoogleCloudOptions,
                                                  PipelineOptions,
                                                  SetupOptions,
                                                  StandardOptions,
                                                  WorkerOptions)
from datetime import date
class Userprocess(beam.DoFn):
    def process(self, msg):
        yield "OK"

def run(argv=None):
    logging.info("Parsing dataflow flags... ")
    pipeline_options = PipelineOptions()
    pipeline_options.view_as(SetupOptions).save_main_session = True
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--project',
        required=True,
        help='project id staging or production')
    parser.add_argument(
        '--temp_location',
        required=True,
        help='temp location')
    parser.add_argument(
        '--job_name',
        required=True,
        help='job name')
    known_args, pipeline_args = parser.parse_known_args(argv)
    today = date.today()
    logging.info("Processing Date is " + str(today))
    google_cloud_options = pipeline_options.view_as(GoogleCloudOptions)
    google_cloud_options.project = known_args.project
    google_cloud_options.job_name = known_args.job_name
    google_cloud_options.temp_location = known_args.temp_location
    # pipeline_options.view_as(StandardOptions).runner = known_args.runner
    with beam.Pipeline(argv=pipeline_args, options=pipeline_options) as p:
        # A ParDo must be applied to a PCollection; Create supplies a trivial
        # input element so this template actually runs end to end.
        _ = p | beam.Create(["start"]) | beam.ParDo(Userprocess())

if __name__ == '__main__':
    logging.getLogger().setLevel(logging.INFO)
    logging.info("Starting dataflow daily pipeline ")
    try:
        run()
    except Exception:
        logging.exception("Pipeline failed")
Try running the script locally for errors.
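One way to do that local run, as a sketch that mirrors the commented-out runner line in the script above, is to force the DirectRunner before switching back to Dataflow:

# Run the same pipeline locally first to surface import/NameError problems early.
pipeline_options.view_as(StandardOptions).runner = 'DirectRunner'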
Trying to use AWS Device Farm for web app test automation on mobile devices. The tests are written in Nightwatch.js.
Below is the Nightwatch configuration:
{
  custom_assertions_path: 'custom-assertions',
  test_settings: {
    default: {
      selenium: {
        start_process: false,
        port: 4723,
        host: "localhost",
        silent: true,
      },
    },
    android: {
      desiredCapabilities: {
        browserName: "Chrome",
        platformName: "Android",
        platformVersion: "7.0",
        device: "Android",
        deviceName: "Pixel 2",
        avd: "Pixel_2_API_24",
      }
    }
  }
}
Tests are running fine locally using the emulator.
But in Device Farm I get the following error: Nightwatch is not able to connect to Appium (ECONNREFUSED).
Starting automation...
Done processing feature files.
Done killing webdriver processes.
Running cucumber...
- Connecting to localhost on port 4723...
 POST /wd/hub/session - ECONNREFUSED
Error: connect ECONNREFUSED 127.0.0.1:4723
⚠ Error connecting to localhost on port 4723.
VError: a BeforeAll hook errored on slave 0, process exiting: dist/src/cucumber.conf.js:64: An error occurred while retrieving a new session: "Connection refused to 127.0.0.1:4723". If the Webdriver/Selenium service is managed by Nightwatch, check if "start_process" is set to "true".
at _bluebird.default.each (/tmp/scratch9MGzGQ.scratch/test-package4sST7I/node_modules/bitcentralqa-base-nightwatch-framework/node_module
Below is the test log from Device Farm.
[DEVICEFARM] Setting up your device. This usually takes 2-3 minutes.
[DEVICEFARM] ########### Start executing testspec ###########
[DEVICEFARM] ########### Entering phase install ###########
[DeviceFarm] echo "Navigate to test package directory"
Navigate to test package directory
[DeviceFarm] cd $DEVICEFARM_TEST_PACKAGE_PATH
[DeviceFarm] npm install *.tgz
npm WARN deprecated core-js@2.6.11: core-js@<3 is no longer maintained and not recommended for usage due to the number of issues. Please, upgrade your dependencies to the actual version of core-js@3.
> core-js@2.6.11 postinstall /tmp/scratch9MGzGQ.scratch/test-package4sST7I/node_modules/core-js
> node -e "try{require('./postinstall')}catch(e){}"
Thank you for using core-js ( https://github.com/zloirock/core-js ) for polyfilling JavaScript standard library!
The project needs your help! Please consider supporting of core-js on Open Collective or Patreon:
> https://opencollective.com/core-js
> https://www.patreon.com/zloirock
Also, the author of core-js ( https://github.com/zloirock ) is looking for a good job -)
npm WARN saveError ENOENT: no such file or directory, open '/tmp/scratch9MGzGQ.scratch/test-package4sST7I/package.json'
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN enoent ENOENT: no such file or directory, open '/tmp/scratch9MGzGQ.scratch/test-package4sST7I/package.json'
npm WARN bitcentralqa-base-nightwatch-framework@0.1.6 requires a peer of cucumber@^6.0.5 but none is installed. You must install peer dependencies yourself.
npm WARN bitcentralqa-base-nightwatch-framework@0.1.6 requires a peer of nightwatch@^1.3.6 but none is installed. You must install peer dependencies yourself.
npm WARN bitcentralqa-base-nightwatch-framework@0.1.6 requires a peer of nightwatch-api@^3.0.1 but none is installed. You must install peer dependencies yourself.
npm WARN bitcentralqa-base-nightwatch-framework@0.1.6 requires a peer of selenium-server@^3.141.59 but none is installed. You must install peer dependencies yourself.
npm WARN bitcentralqa-base-nightwatch-framework@0.1.6 requires a peer of selenium-server-standalone-jar@^3.141.59 but none is installed. You must install peer dependencies yourself.
npm WARN cucumber-pretty@6.0.0 requires a peer of cucumber@>=6.0.0 but none is installed. You must install peer dependencies yourself.
npm WARN test-package4sST7I No description
npm WARN test-package4sST7I No repository field.
npm WARN test-package4sST7I No README data
npm WARN test-package4sST7I No license field.
+ bitcentralqa-base-nightwatch-framework@0.1.6
added 127 packages from 111 contributors and audited 127 packages in 37.515s
found 1 low severity vulnerability
run `npm audit fix` to fix them, or `npm audit` for details
[DeviceFarm] export APPIUM_VERSION=1.14.2
[DeviceFarm] avm $APPIUM_VERSION
/usr/bin/avm: line 261: appium: command not found
 exists :
[DeviceFarm] ln -s /usr/local/avm/versions/$APPIUM_VERSION/node_modules/.bin/appium /usr/local/avm/versions/$APPIUM_VERSION/node_modules/appium/bin/appium.js
[DeviceFarm] if [ $(echo $APPIUM_VERSION | cut -d "." -f2) -ge 15 ]; then
DEVICEFARM_DEVICE_UDID_FOR_APPIUM=$(echo $DEVICEFARM_DEVICE_UDID | tr -d "-");
DEVICEFARM_WDA_DERIVED_DATA_PATH=$DEVICEFARM_WDA_DERIVED_DATA_PATH_V2;
else
DEVICEFARM_DEVICE_UDID_FOR_APPIUM=$DEVICEFARM_DEVICE_UDID;
DEVICEFARM_WDA_DERIVED_DATA_PATH=$DEVICEFARM_WDA_DERIVED_DATA_PATH_V1;
fi
[DEVICEFARM] ########### Entering phase pre_test ###########
[DeviceFarm] if [ $DEVICEFARM_DEVICE_PLATFORM_NAME = "Android" ]; then echo "Start appium server for android"; (appium --log-timestamp --default-capabilities "{\"deviceName\": \"$DEVICEFARM_DEVICE_NAME\", \"platformName\":\"$DEVICEFARM_DEVICE_PLATFORM_NAME\", \"app\":\"$DEVICEFARM_APP_PATH\", \"udid\":\"$DEVICEFARM_DEVICE_UDID\", \"platformVersion\":\"$DEVICEFARM_DEVICE_OS_VERSION\", \"browserName\":\"Chrome\", \"chromedriverExecutable\":\"$DEVICEFARM_CHROMEDRIVER_EXECUTABLE\"}" >> $DEVICEFARM_LOG_DIR/appiumlog.txt 2>&1 &); fi
Start appium server for android
[DeviceFarm] if [ $DEVICEFARM_DEVICE_PLATFORM_NAME = "iOS" ]; then echo "Start appium server for iOS"; (appium --log-timestamp --default-capabilities "{\"usePrebuiltWDA\": true, \"derivedDataPath\":\"$DEVICEFARM_WDA_DERIVED_DATA_PATH\", \"deviceName\": \"$DEVICEFARM_DEVICE_NAME\", \"platformName\":\"$DEVICEFARM_DEVICE_PLATFORM_NAME\", \"app\":\"$DEVICEFARM_APP_PATH\", \"automationName\":\"XCUITest\", \"udid\":\"$DEVICEFARM_DEVICE_UDID_FOR_APPIUM\", \"platformVersion\":\"$DEVICEFARM_DEVICE_OS_VERSION\", \"browserName\":\"Safari\"}" >> $DEVICEFARM_LOG_DIR/appiumlog.txt 2>&1 &); fi
[DeviceFarm] start_appium_timeout=0; while [ true ]; do
if [ $start_appium_timeout -gt 60 ];
then
echo "appium server never started in 60 seconds. Exiting";
exit 1;
fi;
grep -i "Appium REST http interface listener started on 0.0.0.0:4723" $DEVICEFARM_LOG_DIR/appiumlog.txt >> /dev/null 2>&1;
if [ $? -eq 0 ];
then
echo "Appium REST http interface listener started on 0.0.0.0:4723";
break;
else
echo "Waiting for appium server to start. Sleeping for 1 second";
sleep 1;
start_appium_timeout=$((start_appium_timeout+1));
fi;
done;
Waiting for appium server to start. Sleeping for 1 second
Waiting for appium server to start. Sleeping for 1 second
Waiting for appium server to start. Sleeping for 1 second
Waiting for appium server to start. Sleeping for 1 second
Waiting for appium server to start. Sleeping for 1 second
Waiting for appium server to start. Sleeping for 1 second
Waiting for appium server to start. Sleeping for 1 second
Waiting for appium server to start. Sleeping for 1 second
Waiting for appium server to start. Sleeping for 1 second
Waiting for appium server to start. Sleeping for 1 second
Waiting for appium server to start. Sleeping for 1 second
Waiting for appium server to start. Sleeping for 1 second
Waiting for appium server to start. Sleeping for 1 second
Waiting for appium server to start. Sleeping for 1 second
Waiting for appium server to start. Sleeping for 1 second
Waiting for appium server to start. Sleeping for 1 second
Appium REST http interface listener started on 0.0.0.0:4723
[DEVICEFARM] ########### Entering phase test ###########
[DeviceFarm] echo "Navigate to test source code"
Navigate to test source code
[DeviceFarm] cd $DEVICEFARM_TEST_PACKAGE_PATH/node_modules/bitcentralqa-base-nightwatch-framework
[DeviceFarm] echo "Start Appium Node test"
Start Appium Node test
[DeviceFarm] npm install && npm run e2e-build-test -- --env chrome_android_qa --tags accessFuelVideoTesterWebpagePass
> chromedriver@83.0.0 install /tmp/scratch9MGzGQ.scratch/test-package4sST7I/node_modules/bitcentralqa-base-nightwatch-framework/node_modules/chromedriver
> node install.js
(node:4046) ExperimentalWarning: The fs.promises API is experimental
Current existing ChromeDriver binary is unavailable, proceeding with download and extraction.
Downloading from file: https://chromedriver.storage.googleapis.com/83.0.4103.39/chromedriver_linux64.zip
Saving to file: /tmp/83.0.4103.39/chromedriver/chromedriver_linux64.zip
Received 1040K...
Received 2080K...
Received 3120K...
Received 4160K...
Received 5099K total.
Extracting zip contents to /tmp/83.0.4103.39/chromedriver.
Copying to target path /tmp/scratch9MGzGQ.scratch/test-package4sST7I/node_modules/bitcentralqa-base-nightwatch-framework/node_modules/chromedriver/lib/chromedriver
Fixing file permissions.
Done. ChromeDriver binary available at /tmp/scratch9MGzGQ.scratch/test-package4sST7I/node_modules/bitcentralqa-base-nightwatch-framework/node_modules/chromedriver/lib/chromedriver/chromedriver
> edgedriver@4.17134.1 install /tmp/scratch9MGzGQ.scratch/test-package4sST7I/node_modules/bitcentralqa-base-nightwatch-framework/node_modules/edgedriver
> node install.js
downloadUrl:
invalid config loglevel="notice"
NOTE: Cannot find Microsoft WebDriver for the current OS: linux x64 3.13.0-139-generic
> iedriver@3.14.2 install /tmp/scratch9MGzGQ.scratch/test-package4sST7I/node_modules/bitcentralqa-base-nightwatch-framework/node_modules/iedriver
> node install.js
Downloading 64 bit Windows IE driver server
-----
invalid config loglevel="notice"
Downloading https://selenium-release.storage.googleapis.com/3.14/IEDriverServer_x64_3.14.0.zip
tmp/iedriver64/IEDriverServer_x64_3.14.0.zip extracted to tmp/iedriver64
copying tmp/iedriver64 to /tmp/scratch9MGzGQ.scratch/test-package4sST7I/node_modules/bitcentralqa-base-nightwatch-framework/node_modules/iedriver/lib/iedriver64
Success! IEDriverServer binary available at /tmp/scratch9MGzGQ.scratch/test-package4sST7I/node_modules/bitcentralqa-base-nightwatch-framework/node_modules/iedriver/lib/iedriver64\IEDriverServer.exe
Downloading 32 bit Windows IE driver server
-----
Downloading https://selenium-release.storage.googleapis.com/3.14/IEDriverServer_Win32_3.14.0.zip
tmp/iedriver/IEDriverServer_Win32_3.14.0.zip extracted to tmp/iedriver
copying tmp/iedriver to /tmp/scratch9MGzGQ.scratch/test-package4sST7I/node_modules/bitcentralqa-base-nightwatch-framework/node_modules/iedriver/lib/iedriver
Success! IEDriverServer binary available at /tmp/scratch9MGzGQ.scratch/test-package4sST7I/node_modules/bitcentralqa-base-nightwatch-framework/node_modules/iedriver/lib/iedriver\IEDriverServer.exe
> core-js@2.6.11 postinstall /tmp/scratch9MGzGQ.scratch/test-package4sST7I/node_modules/bitcentralqa-base-nightwatch-framework/node_modules/core-js
> node -e "try{require('./postinstall')}catch(e){}"
> core-js-pure@3.6.5 postinstall /tmp/scratch9MGzGQ.scratch/test-package4sST7I/node_modules/bitcentralqa-base-nightwatch-framework/node_modules/core-js-pure
> node -e "try{require('./postinstall')}catch(e){}"
> ejs@2.7.4 postinstall /tmp/scratch9MGzGQ.scratch/test-package4sST7I/node_modules/bitcentralqa-base-nightwatch-framework/node_modules/ejs
> node ./postinstall.js
Thank you for installing EJS: built with the Jake JavaScript build tool (https://jakejs.com/)
> geckodriver@1.19.1 postinstall /tmp/scratch9MGzGQ.scratch/test-package4sST7I/node_modules/bitcentralqa-base-nightwatch-framework/node_modules/geckodriver
> node index.js
Downloading geckodriver... Extracting... Complete.
added 526 packages from 996 contributors and audited 528 packages in 91.837s
found 1 low severity vulnerability
run `npm audit fix` to fix them, or `npm audit` for details
> bitcentralqa-base-nightwatch-framework@0.1.6 e2e-build-test /tmp/scratch9MGzGQ.scratch/test-package4sST7I/node_modules/bitcentralqa-base-nightwatch-framework
> npm run clean && npm run build && npm run e2e-test -- "--env" "chrome_android_qa" "--tags" "accessFuelVideoTesterWebpagePass"
> bitcentralqa-base-nightwatch-framework@0.1.6 clean /tmp/scratch9MGzGQ.scratch/test-package4sST7I/node_modules/bitcentralqa-base-nightwatch-framework
> rimraf dist/**/*
> bitcentralqa-base-nightwatch-framework@0.1.6 build /tmp/scratch9MGzGQ.scratch/test-package4sST7I/node_modules/bitcentralqa-base-nightwatch-framework
> tsc --build ./tsconfig.json
> bitcentralqa-base-nightwatch-framework@0.1.6 e2e-test /tmp/scratch9MGzGQ.scratch/test-package4sST7I/node_modules/bitcentralqa-base-nightwatch-framework
> node dist/index.js "--env" "chrome_android_qa" "--tags" "accessFuelVideoTesterWebpagePass"
Starting automation...
Done processing feature files.
Done killing webdriver processes.
Running cucumber...
- Connecting to localhost on port 4723...
 POST /wd/hub/session - ECONNREFUSED
Error: connect ECONNREFUSED 127.0.0.1:4723
⚠ Error connecting to localhost on port 4723.
VError: a BeforeAll hook errored on slave 0, process exiting: dist/src/cucumber.conf.js:64: An error occurred while retrieving a new session: "Connection refused to 127.0.0.1:4723". If the Webdriver/Selenium service is managed by Nightwatch, check if "start_process" is set to "true".
at _bluebird.default.each (/tmp/scratch9MGzGQ.scratch/test-package4sST7I/node_modules/bitcentralqa-base-nightwatch-framework/node_modules/cucumber/lib/runtime/parallel/slave.js:143:49)
caused by: Error: An error occurred while retrieving a new session: "Connection refused to 127.0.0.1:4723". If the Webdriver/Selenium service is managed by Nightwatch, check if "start_process" is set to "true".
at Selenium2Protocol.handleSessionCreateError (/tmp/scratch9MGzGQ.scratch/test-package4sST7I/node_modules/bitcentralqa-base-nightwatch-framework/node_modules/nightwatch/lib/transport/transport.js:103:15)
at HttpRequest.request.on.err (/tmp/scratch9MGzGQ.scratch/test-package4sST7I/node_modules/bitcentralqa-base-nightwatch-framework/node_modules/nightwatch/lib/transport/transport.js:158:32)
at HttpRequest.emit (events.js:189:13)
at ClientRequest.originalIssuer.on.args (/tmp/scratch9MGzGQ.scratch/test-package4sST7I/node_modules/bitcentralqa-base-nightwatch-framework/node_modules/nightwatch/lib/http/request.js:131:19)
at ClientRequest.emit (events.js:194:15)
at Socket.socketErrorListener (_http_client.js:399:9)
at Socket.emit (events.js:189:13)
at emitErrorNT (internal/streams/destroy.js:82:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:50:3)
at process.internalTickCallback (internal/process/next_tick.js:72:19)
0 scenarios
0 steps
0m00.000s
Done running cucumber...
Done killing webdriver processes.
Cucumber HTML report /tmp/scratch9MGzGQ.scratch/test-package4sST7I/node_modules/bitcentralqa-base-nightwatch-framework/dist/reports/cucumber-chrome_android_qa-report.html generated successfully.
Finished running automation.
[DEVICEFARM] ########### Entering phase post_test ###########
[DEVICEFARM] ########### Finish executing testspec ###########
[DEVICEFARM] ########### Setting upload permissions ###########
[DEVICEFARM] Tearing down your device. Your tests report will come shortly.
Is there any missing config for Nightwatch and Device Farm?
Can you try re-running it with start_process: true and verify that the port wasn't already in use?
The issue was due to a piece of code that killed Chrome WebDriver processes before the start of a test run. Killing the WebDriver processes does not cause any problem locally with the emulator, but for tests running in Device Farm the killing of WebDriver processes had to be disabled.