Using LocalContext.current crashes the app with:
java.lang.IllegalStateException: No default context
at androidx.glance.CompositionLocalsKt$LocalContext$1.invoke(CompositionLocals.kt:35)
at androidx.glance.CompositionLocalsKt$LocalContext$1.invoke(CompositionLocals.kt:35)
at kotlin.SynchronizedLazyImpl.getValue(LazyJVM.kt:74)
at androidx.compose.runtime.LazyValueHolder.getCurrent(ValueHolders.kt:29)
at androidx.compose.runtime.LazyValueHolder.getValue(ValueHolders.kt:31)
at androidx.compose.runtime.ComposerImpl.resolveCompositionLocal(Composer.kt:1776)
at androidx.compose.runtime.ComposerImpl.consume(Composer.kt:1746)
at com.bqubique.quran_randomayah.view.VerseCardKt.ButtonTile(VerseCard.kt:282)
at com.bqubique.quran_randomayah.view.ComposableSingletons$VerseCardKt$lambda-2$1.invoke(VerseCard.kt:79)
at com.bqubique.quran_randomayah.view.ComposableSingletons$VerseCardKt$lambda-2$1.invoke(VerseCard.kt:79)
at androidx.compose.runtime.internal.ComposableLambdaImpl.invoke(ComposableLambda.jvm.kt:116)
at androidx.compose.runtime.internal.ComposableLambdaImpl.invoke(ComposableLambda.jvm.kt:34)
...
Versions used:
compose_version = '1.0.5'
wear_compose_version = '1.0.0-alpha15'
I have tried calling LocalContext.current from every kind of @Composable, even in MainActivity.kt inside setContent { ... }.
I guess the import failed me. I was importing:
import androidx.glance.LocalContext
instead of:
import androidx.compose.ui.platform.LocalContext
I have a Kivy GUI that I made on one machine running Python 3.7, but on another machine running Python 3.9 I'm getting errors from the exact same code.
The code:
from kivy.app import App
from kivy.graphics import *
from kivy.config import Config
from kivy.core.window import Window
Window.top = 30
Window.left = 10
screen_width = 700
screen_height = 775
Window.size = (screen_width, screen_height)
print(f"new window size: {Window.size}")
Config.write()
The error occurs on the first line that accesses Window (Window.top = 30).
The error:
AttributeError: 'NoneType' object has no attribute 'top'
I've tried to find out whether there's a compatibility issue between 3.7 and 3.9, but I haven't found anything in the documentation that hints at that. Is there an install I'm missing?
You certainly have an earlier error saying that no window provider could be found at all, and before that, errors indicating what went wrong with each of the window providers available on your platform. Otherwise Window wouldn't be None.
Try to get the sdl2 window provider working, or post the error that prevents it from loading, to get help with that.
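For example, here is a minimal diagnostic sketch: it forces the sdl2 provider via Kivy's KIVY_WINDOW environment variable and fails fast with a clear message if no window was created (the pip extra mentioned in the comment is an assumption about how Kivy was installed):

import os

# Select the sdl2 window provider explicitly; this must be set before
# importing any kivy module.
os.environ.setdefault('KIVY_WINDOW', 'sdl2')

from kivy.core.window import Window

# When no provider could be loaded, Window is None, and any attribute
# access on it raises exactly the AttributeError shown above.
if Window is None:
    raise RuntimeError("No Kivy window provider loaded; check the log for "
                       "sdl2 errors (e.g. reinstall with: pip install kivy[base])")

Window.top = 30
Window.left = 10
Window.size = (700, 775)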
I am using iforest as described here: https://github.com/titicaca/spark-iforest
But model.save() is throwing an exception:
Exception:
scala.NotImplementedError: The default jsonEncode only supports string, vector and matrix. org.apache.spark.ml.param.Param must override jsonEncode for java.lang.Double.
I followed the code snippet in the "Python API" section of that GitHub page.
from pyspark.ml.feature import VectorAssembler
import os
import tempfile
from pyspark_iforest.ml.iforest import *

The input dataframe schema:
col_1:integer
col_2:integer
col_3:integer

assembler = VectorAssembler(inputCols=in_cols, outputCol="features")
featurized = assembler.transform(df)
iforest = IForest(contamination=0.5, maxDepth=2)
model = iforest.fit(featurized)
model.save("model_path")
model.save() should be able to save model files.
Below is the schema of the output dataframe I get after executing model.transform(featurized):
col_1:integer
col_2:integer
col_3:integer
features:udt
anomalyScore:double
prediction:double
I have just fixed this issue. It was caused by an incorrect param type. You can check out the latest code on the master branch and try again.
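With the fix, the save/load round trip from the repository's README should work. A minimal sketch, reusing iforest and featurized from the question (IForestModel is the loader class exposed by the repo's Python API; paths are placeholders):

import tempfile
from pyspark_iforest.ml.iforest import IForest, IForestModel

# Write the fitted model to a temporary directory.
temp_path = tempfile.mkdtemp()
model_path = temp_path + "/iforest_model"
model = iforest.fit(featurized)
model.save(model_path)

# Reload it and confirm it still scores the same dataframe.
loaded_model = IForestModel.load(model_path)
loaded_model.transform(featurized).show(5)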
A PicklingError is raised when I run my data pipeline remotely: the data pipeline has been written using the Beam SDK for Python and I am running it on top of Google Cloud Dataflow. The pipeline works fine when I run it locally.
The following code generates the PicklingError and should reproduce the problem:
import apache_beam as beam
from apache_beam.transforms import pvalue
from apache_beam.io.fileio import _CompressionType
from apache_beam.utils.options import PipelineOptions
from apache_beam.utils.options import GoogleCloudOptions
from apache_beam.utils.options import SetupOptions
from apache_beam.utils.options import StandardOptions
if __name__ == "__main__":
    pipeline_options = PipelineOptions()
    pipeline_options.view_as(StandardOptions).runner = 'BlockingDataflowPipelineRunner'
    pipeline_options.view_as(SetupOptions).save_main_session = True
    google_cloud_options = pipeline_options.view_as(GoogleCloudOptions)
    google_cloud_options.project = "project-name"
    google_cloud_options.job_name = "job-name"
    google_cloud_options.staging_location = 'gs://path/to/bucket/staging'
    google_cloud_options.temp_location = 'gs://path/to/bucket/temp'
    p = beam.Pipeline(options=pipeline_options)
    p.run()
Below is a sample from the beginning and the end of the Traceback:
WARNING: Could not acquire lock C:\Users\ghousains\AppData\Roaming\gcloud\credentials.lock in 0 seconds
WARNING: The credentials file (C:\Users\ghousains\AppData\Roaming\gcloud\credentials) is not writable. Opening in read-only mode. Any refreshed credentials will only be valid for this run.
Traceback (most recent call last):
File "formatter_debug.py", line 133, in <module>
p.run()
File "C:\Miniconda3\envs\beam\lib\site-packages\apache_beam\pipeline.py", line 159, in run
return self.runner.run(self)
....
....
....
File "C:\Miniconda3\envs\beam\lib\sitepackages\apache_beam\runners\dataflow_runner.py", line 172, in run
self.dataflow_client.create_job(self.job))
StockPickler.save_global(pickler, obj)
File "C:\Miniconda3\envs\beam\lib\pickle.py", line 754, in save_global (obj, module, name))
pickle.PicklingError: Can't pickle <class 'apache_beam.internal.clients.dataflow.dataflow_v1b3_messages.TypeValueValuesEnum'>: it's not found as apache_beam.internal.clients.dataflow.dataflow_v1b3_messages.TypeValueValuesEnum
I've found that your error gets raised when a Pipeline object is included in the context that gets pickled and sent to the cloud:
pickle.PicklingError: Can't pickle <class 'apache_beam.internal.clients.dataflow.dataflow_v1b3_messages.TypeValueValuesEnum'>: it's not found as apache_beam.internal.clients.dataflow.dataflow_v1b3_messages.TypeValueValuesEnum
Naturally, you might ask:
What's making the Pipeline object unpickleable when it's sent to the cloud, since normally it's pickleable?
If this were really the problem, then wouldn't I get this error all the time - isn't a Pipeline object normally included in the context sent to the cloud?
If the Pipeline object isn't normally included in the context sent to the cloud, then why is a Pipeline object being included in my case?
(1)
When you call p.run() on a Pipeline with cloud=True, one of the first things that happens is that p.runner.job=apiclient.Job(pipeline.options) is set in apache_beam.runners.dataflow_runner.DataflowPipelineRunner.run.
Without this attribute set, the Pipeline is pickleable. But once this is set, the Pipeline is no longer pickleable, since p.runner.job.proto._Message__tags[17] is a TypeValueValuesEnum, which is defined as a nested class in apache_beam.internal.clients.dataflow.dataflow_v1b3_messages. AFAIK nested classes cannot be pickled (even by dill - see How can I pickle a nested class in python?).
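Here is a tiny Python illustration of this failure mode (this uses the Python 3 pickler; the traceback above is from the Python 2-era pickle.py, but the "it's not found as ..." check is the same). pickle stores a class by its import path and verifies at dump time that the path resolves back to the class, so any class that can't be found where pickle expects it triggers exactly this error:

import pickle

class Outer:
    class Inner:  # stands in for a generated nested class like TypeValueValuesEnum
        pass

Hidden = Outer.Inner
del Outer  # the class can no longer be found at __main__.Outer.Inner

try:
    pickle.dumps(Hidden())
except pickle.PicklingError as e:
    print(e)  # e.g. Can't pickle <class '__main__.Outer.Inner'>: it's not found as __main__.Outer.Inner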
(2)-(3)
Counterintuitively, a Pipeline object is normally not included in the context sent to the cloud. When you call p.run() on a Pipeline with cloud=True, only the following objects are pickled (and note that the pickling happens after p.runner.job gets set):
1. If save_main_session=True, then all global objects in the module designated __main__ are pickled. (__main__ is the script that you ran from the command line.)
2. Each transform defined in the pipeline is individually pickled.
In your case, you encountered #1, which is why your solution worked. I actually encountered #2 where I defined a beam.Map lambda function as a method of a composite PTransform. (When composite transforms are applied, the pipeline gets added as an attribute of the transform...) My solution was to define those lambda functions in the module instead.
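For reference, a sketch of that refactor (MyCompositeTransform and extract_value are made-up names; recent SDKs use expand as the composite hook, older ones used apply):

import apache_beam as beam

def extract_value(element):
    # Module-level function: pickled by reference, so serializing it does
    # not drag the enclosing transform (or the pipeline) along with it.
    return element['value']

class MyCompositeTransform(beam.PTransform):
    def expand(self, pcoll):
        # A lambda or bound method defined here would close over self, and
        # self can hold a reference back to the pipeline once applied.
        return pcoll | beam.Map(extract_value)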
A longer-term solution would be for us to fix this in the Apache Beam project. TBD!
This should be fixed in the google-dataflow 0.4.4 SDK release with https://github.com/apache/incubator-beam/pull/1485
I resolved this problem by encapsulating the body of the main within a run() method and invoking run().
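A minimal sketch of that pattern, with the options from the question: built inside run(), the Pipeline object is a function local instead of a module-level global, so save_main_session=True no longer tries to pickle it:

import apache_beam as beam
from apache_beam.utils.options import PipelineOptions
from apache_beam.utils.options import SetupOptions

def run():
    # The Pipeline lives only in this function's scope, so it is not part
    # of the __main__ globals captured by save_main_session.
    pipeline_options = PipelineOptions()
    pipeline_options.view_as(SetupOptions).save_main_session = True
    p = beam.Pipeline(options=pipeline_options)
    p.run()

if __name__ == "__main__":
    run()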
Below is the domain class:
package com.test

class Person {
    String name
    static mappedBy = [friends: 'none']
    static hasMany = [friends: Person]
}
It works well in normal cases, but when I tried to test save by mocking with the @Mock annotation in Spock, I got the exception below:
| org.grails.datastore.mapping.model.IllegalMappingException:
Non-existent mapping property [none] specified for property [friends] in class [com.test.Person]
at org.grails.datastore.mapping.model.config.GormMappingConfigurationStrategy.establishRelationshipForCollection(GormMappingConfigurationStrategy.java:364)
at org.grails.datastore.mapping.model.config.GormMappingConfigurationStrategy.getPersistentProperties(GormMappingConfigurationStrategy.java:206)
at org.grails.datastore.mapping.model.AbstractPersistentEntity.initialize(AbstractPersistentEntity.java:87)
at org.grails.datastore.mapping.model.config.GormMappingConfigurationStrategy.getOrCreateAssociatedEntity(GormMappingConfigurationStrategy.java:675)
at org.grails.datastore.mapping.model.config.GormMappingConfigurationStrategy.establishDomainClassRelationship(GormMappingConfigurationStrategy.java:632)
at org.grails.datastore.mapping.model.config.GormMappingConfigurationStrategy.getPersistentProperties(GormMappingConfigurationStrategy.java:214)
at org.grails.datastore.mapping.model.AbstractPersistentEntity.initialize(AbstractPersistentEntity.java:87)
at org.grails.datastore.mapping.model.AbstractMappingContext.initializePersistentEntity(AbstractMappingContext.java:250)
at org.grails.datastore.mapping.model.AbstractMappingContext.initialize(AbstractMappingContext.java:239)
at grails.test.mixin.domain.DomainClassUnitTestMixin.initializeMappingContext(DomainClassUnitTestMixin.groovy:150)
at grails.test.mixin.domain.DomainClassUnitTestMixin.mockDomains(DomainClassUnitTestMixin.groovy:144)
at org.spockframework.util.ReflectionUtil.invokeMethod(ReflectionUtil.java:138)
at org.spockframework.runtime.extension.builtin.JUnitFixtureMethodsExtension$FixtureType$FixtureMethodInterceptor.intercept(JUnitFixtureMethodsExtension.java:145)
at org.spockframework.runtime.extension.MethodInvocation.proceed(MethodInvocation.java:84)
at org.spockframework.util.ReflectionUtil.invokeMethod(ReflectionUtil.java:138)
at org.spockframework.util.ReflectionUtil.invokeMethod(ReflectionUtil.java:138)
at org.spockframework.util.ReflectionUtil.invokeMethod(ReflectionUtil.java:138)
Below is the test case:
import grails.test.mixin.Mock
import grails.test.mixin.TestFor
import spock.lang.Specification

@TestFor(PersonController)
@Mock([Person])
class PersonControllerSpec extends Specification {

    def "test save person"() {
        given: "some person request parameters set for person"
        params.putAll([name: 'test234', friends: [], action: 'save', controller: 'person'])

        when: "person.save is called"
        controller.save()

        then: "it must create a person object"
        Person.count() == 1
    }
}
Any idea what could be done in this case?
This issue is fixed after Grails 2.5.1: https://github.com/grails/grails-core/issues/669
I solved the issue in our project by upgrading to Grails 2.5.5 and updating the hibernate4 plugin to version hibernate4:5.0.0.RELEASE.
There is already a JIRA issue for the same problem: jira.grails.org/browse/GRAILS-11285
I was able to find a workaround by changing the version of the grails-datastore jar, since we were getting the exception
Non-existent mapping property [none] specified for property [friends]
in GormMappingConfigurationStrategy.java, which lives in the grails-datastore-core jar.
Hence, after upgrading that version from 3.0.6 to 3.1.5, and just to be safe, I added a check for the test environment as well, like below:
if (Environment.getCurrentEnvironment() == Environment.TEST) {
    compile 'org.grails:grails-datastore-core:3.1.5.RELEASE'
}
Now the jar is only added for the test environment, and we can continue with our test cases.
Cheers!!!
I got this error when running Fastlane's Snapshot tool:
UIAutomation Error: Script threw an uncaught JavaScript error: Can't find variable: captureLocalizedScreenshot on line 8 of snapshot.js
This is my snapshot.js file:
#import 'SnapshotHelper.js'
var target = UIATarget.localTarget();
var app = target.frontMostApp();
var window = app.mainWindow();
target.delay(3);
captureLocalizedScreenshot('0-LandingScreen');
The problem was the single quotes in the import statement.
For anyone like me who is addicted to code stylers: make sure yours doesn't change that first line.
It should be:
#import "SnapshotHelper"