Creating application version archive "app-8dfd-161111_001943".
Uploading: [##################################################] 100% Done...
INFO: Environment update is starting.
INFO: Deploying new version to instance(s).
ERROR: [Instance: i-97f2b48f] Command failed on instance. Return code: 1 Output: (TRUNCATED)...b:1:in `<top (required)>'
/var/app/ondeck/config/environment.rb:5:in `<top (required)>'
/opt/rubies/ruby-2.3.1/bin/bundle:23:in `load'
/opt/rubies/ruby-2.3.1/bin/bundle:23:in `<main>'
Tasks: TOP => environment
(See full trace by running task with --trace).
Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/11_asset_compilation.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
INFO: Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
ERROR: Unsuccessful command execution on instance id(s) 'i-97f2b48f'. Aborting the operation.
ERROR: Failed to deploy application.
Traceback (most recent call last):
File "/usr/local/bin/eb", line 11, in
load_entry_point('awsebcli==3.8.3', 'console_scripts', 'eb')()
File "/usr/local/lib/python2.7/dist-packages/ebcli/core/ebcore.py", line 150, in main
app.run()
File "/usr/local/lib/python2.7/dist-packages/cement/core/foundation.py", line 797, in run
return_val = self.controller._dispatch()
File "/usr/local/lib/python2.7/dist-packages/cement/core/controller.py", line 472, in _dispatch
return func()
File "/usr/local/lib/python2.7/dist-packages/cement/core/controller.py", line 478, in _dispatch
return func()
File "/usr/local/lib/python2.7/dist-packages/ebcli/core/abstractcontroller.py", line 57, in default
self.do_command()
File "/usr/local/lib/python2.7/dist-packages/ebcli/controllers/deploy.py", line 94, in do_command
staged=self.staged, timeout=self.timeout, source=self.source)
File "/usr/local/lib/python2.7/dist-packages/ebcli/operations/deployops.py", line 45, in deploy
can_abort=True)
File "/usr/local/lib/python2.7/dist-packages/ebcli/operations/commonops.py", line 91, in wait_for_success_events
if _is_success_string(event.message):
File "/usr/local/lib/python2.7/dist-packages/ebcli/operations/commonops.py", line 264, in _is_success_string
raise ServiceError(message)
ebcli.objects.exceptions.ServiceError: Failed to deploy application.
This error appears when I deploy my application to Elastic Beanstalk (eb deploy). How can I deploy successfully?
It looks like your application failed to deploy onto the EC2 instance. You should be able to get a detailed log of what went wrong by downloading the logs and checking them.
You can get the logs with the EB CLI like so:
eb logs --all
The deployment logs are in /var/log/eb-activity.log, which you can find at the following path after running the command above:
/PROJECT-ROOT/.elasticbeanstalk/logs/latest/i-xxxxxxx/var/log/eb-activity.log
The details of why your deployment failed should be in this log file.
I have a Docker container running PySpark, Hadoop, and all the required dependencies. I am using spark-submit to query MinIO, and I want to write the output DataFrame to a file. Reading the file works, but writing does not. If I execute Python in that container and try to create a file at the same path, it works.
Am I missing some spark configuration?
This is the error I get:
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 1109, in save
File "/usr/local/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1304, in __call__
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 111, in deco
File "/usr/local/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", line 326, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o38.save
: java.net.ConnectException: Call From 10d3463d04ce/10.0.1.132 to localhost:9000 failed on connection exception:
java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Relevant code:
spark = SparkSession.builder.getOrCreate()
spark_context = spark.sparkContext
spark_context._jsc.hadoopConfiguration().set('fs.s3a.access.key', 'minio')
spark_context._jsc.hadoopConfiguration().set(
'fs.s3a.secret.key', AWS_SECRET_ACCESS_KEY
)
spark_context._jsc.hadoopConfiguration().set('fs.s3a.path.style.access', 'true')
spark_context._jsc.hadoopConfiguration().set(
'fs.s3a.impl', 'org.apache.hadoop.fs.s3a.S3AFileSystem'
)
spark_context._jsc.hadoopConfiguration().set('fs.s3a.endpoint', AWS_S3_ENDPOINT)
spark_context._jsc.hadoopConfiguration().set(
'fs.s3a.connection.ssl.enabled', 'false'
)
df = spark.sql(query)
df.show() # this works perfectly fine
df.coalesce(1).write.format('json').save(output_path) # here I get the error
The solution was to prepend file:// to output_path.
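For illustration, a minimal sketch of the fix (the path below is invented for the example, not from the original code): without an explicit scheme, Hadoop resolves the output path against the default filesystem, which in this container points at localhost:9000 and triggers the ConnectException above, while the file:// scheme forces the write onto the container's local filesystem.

output_path = 'file:///tmp/output/result.json'  # hypothetical local path inside the container
df.coalesce(1).write.format('json').save(output_path)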
I am trying to upgrade my GitLab CE instance, which is running in a Docker container, from version 11.9.1 to 14.2.1. I am following the required upgrade path from the official GitLab documentation:
11.9.1->11.11.8->12.0.12->12.1.17->12.10.14->13.0.4->13.1.11->13.8.8->13.12.9->14.0.7->14.2.1
The last version that works is 14.0.7; I am also able to run the latest 14.1.x release, but during the migration to 14.2.x the following error appears and some migrations do not complete.
There was an error running gitlab-ctl reconfigure:
rails_migration[gitlab-rails] (gitlab::database_migrations line 51) had an error: Mixlib::ShellOut::ShellCommandFailed: bash[migrate gitlab-rails database] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/resources/rails_migration.rb line 16) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'
---- Begin output of "bash" "/tmp/chef-script20210903-28-4vm0c2" ----
STDOUT: rake aborted!
StandardError: An error has occurred, all later migrations canceled:
Expected batched background migration for the given configuration to be marked as 'finished', but it is 'active': {:job_class_name=>"CopyColumnUsingBackgroundMigrationJob", :table_name=>"ci_job_artifacts", :column_name=>"id", :job_arguments=>[["id", "job_id"], ["id_convert_to_bigint", "job_id_convert_to_bigint"]]}
Finalize it manualy by running
sudo gitlab-rake gitlab:background_migrations:finalize[CopyColumnUsingBackgroundMigrationJob,ci_job_artifacts,id,'[["id"\, "job_id"]\, ["id_convert_to_bigint"\, "job_id_convert_to_bigint"]]']
For more information, check the documentation
https://docs.gitlab.com/ee/user/admin_area/monitoring/background_migrations.html#database-migrations-failing-because-of-batched-background-migration-not-finished
/opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/database/migration_helpers.rb:1129:in `ensure_batched_background_migration_is_finished'
/opt/gitlab/embedded/service/gitlab-rails/db/post_migrate/20210706212710_finalize_ci_job_artifacts_bigint_conversion.rb:14:in `up'
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/db.rake:61:in `block (3 levels) in <top (required)>'
/opt/gitlab/embedded/bin/bundle:23:in `load'
/opt/gitlab/embedded/bin/bundle:23:in `<main>'
Caused by:
Expected batched background migration for the given configuration to be marked as 'finished', but it is 'active': {:job_class_name=>"CopyColumnUsingBackgroundMigrationJob", :table_name=>"ci_job_artifacts", :column_name=>"id", :job_arguments=>[["id", "job_id"], ["id_convert_to_bigint", "job_id_convert_to_bigint"]]}
Finalize it manualy by running
sudo gitlab-rake gitlab:background_migrations:finalize[CopyColumnUsingBackgroundMigrationJob,ci_job_artifacts,id,'[["id"\, "job_id"]\, ["id_convert_to_bigint"\, "job_id_convert_to_bigint"]]']
For more information, check the documentation
https://docs.gitlab.com/ee/user/admin_area/monitoring/background_migrations.html#database-migrations-failing-because-of-batched-background-migration-not-finished
/opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/database/migration_helpers.rb:1129:in `ensure_batched_background_migration_is_finished'
/opt/gitlab/embedded/service/gitlab-rails/db/post_migrate/20210706212710_finalize_ci_job_artifacts_bigint_conversion.rb:14:in `up'
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/db.rake:61:in `block (3 levels) in <top (required)>'
/opt/gitlab/embedded/bin/bundle:23:in `load'
/opt/gitlab/embedded/bin/bundle:23:in `<main>'
Tasks: TOP => db:migrate
(See full trace by running task with --trace)
== 20210706212710 FinalizeCiJobArtifactsBigintConversion: migrating ===========
STDERR:
---- End output of "bash" "/tmp/chef-script20210903-28-4vm0c2" ----
Ran "bash" "/tmp/chef-script20210903-28-4vm0c2" returned 1
Running handlers:
Running handlers complete
Chef Infra Client failed. 11 resources updated in 16 seconds
Thank you for using GitLab Docker Image!
Current version: gitlab-ce=14.2.0-ce.0
Configure GitLab for your system by editing /etc/gitlab/gitlab.rb file
And restart this container to reload settings.
To do it use docker exec:
docker exec -it gitlab editor /etc/gitlab/gitlab.rb
docker restart gitlab
For a comprehensive list of configuration options please see the Omnibus GitLab readme
https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/README.md
If this container fails to start due to permission problems try to fix it by executing:
docker exec -it gitlab update-permissions
docker restart gitlab
Cleaning stale PIDs & sockets
Preparing services...
Starting services...
Configuring GitLab...
/opt/gitlab/embedded/bin/runsvdir-start: line 24: ulimit: pending signals: cannot modify limit: Operation not permitted
/opt/gitlab/embedded/bin/runsvdir-start: line 37: /proc/sys/fs/file-max: Read-only file system
I have tried executing the migrations by hand and all the other fixes the logs propose, but none of them worked.
I use Ubuntu 20.04 LTS and Docker version 20.10.7, build 20.10.7-0ubuntu1~20.04.1
OK, it turned out that I had to upgrade to 14.0.5 first and wait for some background migrations to complete (you can see them in Menu -> Admin -> Monitoring -> Background Migrations).
I have a Windows Server 2012 R2 machine, and I downloaded and installed the Ruby Hosting Package from the Microsoft Web Platform Installer. When I try to run an existing website, I get the following error:
Error
Helicon Zoo module has caught up an error. Please see the details below.
Worker Status
The process was created
Windows error
The pipe has been ended. (ERROR CODE: 109)
Internal module error
message: Application backend read Error. type: ZooException file: App\Jobs\JobBase.cpp line: 531 version: 3.1.98.538
STDERR
[tid-5360004] Couldn't run bundler/setup: cannot load such file -- bundler/setup (String)
[tid-5360004] cannot load such file -- rack (LoadError)
C:/Zoo/Workers/ruby/lib/app.rb:84:in `eval'
C:/Ruby19/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
(eval):1:in `assure_rack'
C:/Zoo/Workers/ruby/lib/app.rb:84:in `eval'
C:/Zoo/Workers/ruby/lib/app.rb:84:in `assure_rack'
C:/Zoo/Workers/ruby/lib/app.rb:23:in `build_app'
C:/Zoo/Workers/ruby/lib/app.rb:16:in `initialize'
C:/Zoo/Workers/ruby/lib/worker.rb:4:in `new'
C:/Zoo/Workers/ruby/lib/worker.rb:4:in `initialize'
C:/Zoo/Workers/ruby/zoorack.rb:30:in `new'
C:/Zoo/Workers/ruby/zoorack.rb:30:in `<module:Zack>'
C:/Zoo/Workers/ruby/zoorack.rb:12:in `<main>'
Any idea on how to fix this?
Thanks.
I am using Python 3.7 with Appium 1.15.1 on a real Android device.
When my script finishes its job, I close the driver with these lines:
if p_driver:
    p_driver.close()
but I get this error output:
File "C:\Users\Nino\AppData\Local\Programs\Python\Python37\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 688, in close
self.execute(Command.CLOSE)
File "C:\Users\Nino\AppData\Local\Programs\Python\Python37\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "C:\Users\Nino\AppData\Roaming\Python\Python37\site-packages\appium\webdriver\errorhandler.py", line 29, in check_response
raise wde
File "C:\Users\Nino\AppData\Roaming\Python\Python37\site-packages\appium\webdriver\errorhandler.py", line 24, in check_response
super(MobileErrorHandler, self).check_response(response)
File "C:\Users\Nino\AppData\Local\Programs\Python\Python37\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: An unknown server-side error occurred while processing the command. Original error: Could not proxy. Proxy error: Could not proxy command to remote server. Original error: 404 - undefined
I would like to understand what I am doing wrong. What is the proper way to close the driver?
Can you help me, please?
Step 1: create the Appium session (the driver instance) first:
session_instance = webdriver.Remote(str(url), caps_dic)
where url is your Appium server URL, something like "http://127.0.0.1:4723/wd/hub", and caps_dic is a dictionary of all your desired capabilities.
Step 2: call the quit() method on that session:
session_instance.quit()
So the whole snippet is:
session_instance = webdriver.Remote(str(url), caps_dic)
session_instance.quit()
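As a slightly fuller sketch (the capabilities and URL below are placeholders, not taken from the question), wrapping the session in try/finally ensures quit() runs even if the script fails partway; quit() tears down the whole Appium session, whereas close() only closes the current window, which is what tends to produce the proxy error above.

from appium import webdriver

caps_dic = {
    'platformName': 'Android',        # assumed capabilities for illustration
    'deviceName': 'my_real_device',
    'appPackage': 'com.example.app',
    'appActivity': '.MainActivity',
}
driver = webdriver.Remote('http://127.0.0.1:4723/wd/hub', caps_dic)
try:
    pass  # your test steps go here
finally:
    driver.quit()  # ends the whole session cleanly instead of close()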
My Dataflow jobs fail with the following error:
INFO:root:2018-10-15T18:55:37.417Z: JOB_MESSAGE_ERROR: Workflow failed.
Causes: S17:fold2/Write/WriteImpl/WindowInto(WindowIntoFn)+write instances fold2/Write/WriteImpl/GroupByKey/Reify+write instances fold2/Write/WriteImpl/GroupByKey/Write failed.,
A work item was attempted 4 times without success.
Each time the worker eventually lost contact with the service. The work item was attempted on:
yuri-nine-gag-recommender-10151140-3kmq-harness-mdgd,
yuri-nine-gag-recommender-10151140-3kmq-harness-mdgd,
yuri-nine-gag-recommender-10151140-3kmq-harness-41dd,
yuri-nine-gag-recommender-10151140-3kmq-harness-mdgd
Digging into the logs shows only one error:
An exception was raised when trying to execute the workitem 6479210647275353150 :
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 642, in do_work
work_executor.execute()
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/executor.py", line 158, in execute
op.finish()
File "dataflow_worker/shuffle_operations.py", line 144, in dataflow_worker.shuffle_operations.ShuffleWriteOperation.finish
def finish(self):
File "dataflow_worker/shuffle_operations.py", line 145, in dataflow_worker.shuffle_operations.ShuffleWriteOperation.finish
with self.scoped_finish_state:
File "dataflow_worker/shuffle_operations.py", line 147, in dataflow_worker.shuffle_operations.ShuffleWriteOperation.finish
self.writer.__exit__(None, None, None)
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/shuffle.py", line 599, in __exit__
self.writer.Close()
File "third_party/windmill/shuffle/python/shuffle_client.pyx", line 202, in shuffle_client.PyShuffleWriter.Close
IOError: Shuffle close failed: FAILED_PRECONDITION: Precondition check failed.
Any ideas?
I finally figured out the problem by removing various pieces of code, printing tons of logs, and running the job again. It turned out that I had a regular expression that blew up on one particular entry. Unfortunately, the Dataflow logs were not helpful at all.
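For anyone debugging something similar, a small sketch of the kind of guard that makes this easier to find (the DoFn, regex, and record format are invented for illustration, not from the original pipeline): catching exceptions around the per-element work and logging the element makes the offending record show up in the worker logs instead of only surfacing as a generic shuffle failure.

import logging
import re

import apache_beam as beam

class ExtractPairs(beam.DoFn):
    """Hypothetical DoFn that applies a regex to each record."""
    PATTERN = re.compile(r'(?P<user>\w+):(?P<item>\w+)')  # example pattern only

    def process(self, element):
        try:
            match = self.PATTERN.search(element)
            if match:
                yield (match.group('user'), match.group('item'))
        except Exception:
            # Log the element that broke the parsing so it can be located in the worker logs.
            logging.exception('Failed to parse element: %r', element)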