I have boot code at 0x08000000 and application code at 0x08060000. I can jump to the application from the boot code if I comment out the condition check shown below:
//if (( (*(__IO uint32_t*)0x08060000) & 0x2FFE0000 ) == 0x20000000)
{
    JumpAddress = *(__IO uint32_t*)( 0x08060000 + 4 );
    Jump_To_Application = (pFunction)JumpAddress;
    __set_MSP( *(__IO uint32_t*)0x08060000 );
    Jump_To_Application();
}
The condition is not satisfied because the left side equals 0x20020000. I don't understand why it is 0x20020000 instead of 0x20000000.
Why do we check the content of the start address against 0x20000000? What is stored at this memory address, and what should it be normally?
It's a vector table that's located at these addresses (at 0x08000000 for the bootloader and at 0x08060000 for the application, respectively). The first value stored in the vector table is the reset value of the stack pointer.
You can check this link for more information: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0553a/BABIFJFG.html
As for why you'd want to check the value this way, one can only guess. It is likely there as a kind of safety check for whether a valid application is loaded. It's definitely not sufficient and doesn't guarantee much (e.g. only half of the application may be loaded). It also depends entirely on your memory layout and where in RAM you decide to place your stack. My guess is that you copy-pasted (or generated) the portion of code responsible for your application's memory layout, then copy-pasted the check in question from another source; the two will likely not work well together.
In selenium there is a method switchTo that can be used to switch to a frame.
Is there anything similar in Playwright?
Yes. The syntax is a bit different than Selenium but the idea is more or less the same. You can use frameLocator to drill down to the frame and interact with elements contained within. Think of it like switchTo().
For example, you have a <iframe> with a login form inside, and want to fill it:
const email = 'foo@bar.com';
await page.frameLocator('#sillyWidget iframe').locator('#username').click();
for (let i = 0; i < email.length; i += 1) {
    await page.keyboard.press(email[i], { delay: 20 + Math.random() * 5 });
}
List all the frames on the page:
const frames = page.frames(); // synchronous; includes the main frame
for (const frame of frames) {
    console.debug('Frame --> ' + frame.url());
}
Listen for new frames:
page.on('frameattached', frame => console.debug('Frame attached: ' + frame.url()));
Edit:
A Page already comprises the main Frame and all subframes, so the switchTo() analogy kind of breaks down in that regard. The frame already exists
inside the Page, so switching to it doesn't make sense.
In the case you want to perform many operations in a child frame, just grab a reference to it, and work with it like you would a Page (nearly all the same functions are available).
Also note that:
Playwright uses the Chrome DevTools Protocol (CDP) to speak to the browser.
Selenium uses the WebDriver protocol (formerly the JSON Wire Protocol).
They are fundamentally different beasts, though some syntax looks the same. CDP is considerably more powerful, and supported natively by Chromium-based browsers by design.
const frame = page.frame('frame-login'); // reference to your frame
await frame.fill('#username-input', 'Phillip Masse'); // fill an input inside the frame
// [1]
await Promise.all([page.waitForNavigation(), page.click('#btn')]); // now do stuff on the page level
[1]: If memory serves, in Selenium we'd need driver.switch_to.default_content() here.
Context of Selenium driver vs Playwright page
TL;DR: driving the browser and everything inside it vs. interacting with the browser and specific tabs (or “pages”) within it
Not exactly. Playwright takes a slightly different approach. Instead of just a driver which contains all the methods to drive the browser and anything within it, Playwright splits this into more specific pieces - browser, context, and page.
Browser is for the browser itself as a whole, which can contain many contexts and tabs.
BrowserContext for specific independent browsing sessions (for instance, a tab opened by another tab will be part of the same session or context).
Page for controlling a specific tab (or “page”) within the browser/context. Each separate tab will have its own Page instance to represent it.
The benefits of this include being able to share context between pages, working with multiple tabs simultaneously more easily, and other areas where the separation is useful. You’re not tied down to just using one driver instance for everything.
Specific Answer
For your question there's an added piece, Frames, which you can still access directly from a Page while also interacting with each frame separately, as its own entity or “page” of sorts. The main page is really just its own top-level frame with page content, and each iframe is basically its own page with its own content.
The closest thing to switchTo here is to use .frame() to get the specific Frame and interact with it, whether by calling .frame each time or by storing the frame and passing it around. It has most of the main Page methods you would need anyway, so in many cases it can simply be used in a Page's place. Unlike switchTo, which actually told the driver to drive one frame instead of another and required switching back, you can just access the Frame itself for those parts while the Page keeps representing the full page.
Note that there is a difference between FrameLocator and Frame. The first solely provides a way to locate elements within an iframe, whereas the second is like another Page specific to that frame allowing you to interact with it similarly.
We have been using Realm (Swift binding, currently version 3.12.0) since the earliest days of our project. In some early versions before 1.0, Realm provided change listeners for Results without actually giving change sets.
We used this a lot in order to find out whether a specific Results list changed.
Later the Realm team replaced this API with change-set-providing methods. We had to switch, and we are now misusing this API just to find out whether anything in a specific list changed (inserts, deletions, modifications).
Together with RxSwift we wrote our own implementation of Results change listening which looks like this:
public var observable: Observable<Base> {
    return Observable.create { observer in
        let token = self.base.observe { changes in
            if case .update = changes {
                observer.onNext(self.base)
            }
        }
        observer.onNext(self.base)
        return Disposables.create(with: {
            observer.onCompleted()
            token.invalidate()
        })
    }
}
When we now want to have consecutive updates on a list we subscribe like so:
someRealm.objects(SomeObject.self).filter(<some filter>).rx.observable
    .subscribe(<subscription code that gets called on every update>)
    // dispose code missing
We wrote the extension on RealmCollection so that we can subscribe to List type as well.
The concept is equal to RxRealm's approach.
So now in our App we have a lot of filtered lists/results that we are subscribing to.
When data gets more and more we notice significant performance losses when it comes to seeing a change visually after writing something into the DB.
For example:
Let's say we have a Car Realm Object class with some properties and some 1-to-n and some 1-to-1 relationships. One of the properties is a Bool, namely isDriving.
Now we have a lot of cars stored in the DB, and a bunch of change listeners with different filters listening to changes of the cars collection (collection observers listening for change sets in order to find out whether the list changed).
If I take one car from some list and set its isDriving property from false to true (important: we do writes in the background), ideally the change listener fires quickly and I get a nearly immediate, correct response to my write on the main thread.
Added with edit on 2019-06-19:
Let's make the scenario still a little more real:
Let's change something further down the hierarchy, say the tire manufacturer's name. Say a Car has a List<Tire>, a Tire has a Manufacturer, and a Manufacturer has a name.
We're still listening to Results collection changes with some more or less complex filters applied.
Then we change the name of a Manufacturer which is connected to one of the tires, which is connected to one of the cars, which is in that filtered list.
Can this still be fast?
Obviously, the longer the results/lists with attached change listeners get, the longer Realm's internal change listener takes to calculate the differences, and the later it fires.
So after a write we see the changes - in worst case - much later.
In our case this is not acceptable. So we are thinking through different scenarios.
One scenario would be to stop using .observe on lists/results and switch to Realm.observe, which fires every time anything at all changed in the Realm. That is not ideal, but it is fast because the change-calculation process is skipped.
My question is: What can I do to solve this whole dilemma and make our app fast again?
The crucial thing is the threading. We always write in the background by design, so the writes themselves should be very fast, but the changes then need to be synchronized to the other threads where Realms are open.
In my understanding that happens after the change detection for all Results has run through; is that right?
So when I read on another thread, the data is only fresh after the thread sync, which happens after all notifications were sent out. I am not currently sure whether the sync happens before that; that would be better, but I have not tested it yet.
I have created an entity called #USER-NAME and have set it as a requirement.
Now, the first time the entity is detected in the conversation - say, "I am John" - the memory is set to John. On a subsequent encounter of the same entity with a different value - "I am Dave" - the memory remains unchanged.
I have seen the edit memory option, which provides 1. reset memory and 2. set to a value. Option 2 does not provide a way to set the memory to the value of #USER-NAME; it only allows entering static values.
How can I update the memory every time the value of the entity changes?
EDIT
Hi, I am attaching some screenshots to show what exactly is going wrong.
I have an entity named '#USER_NAME' that saves the user name in a memory variable.
I make the following conversation -
The JSON payload after the conversation is as follows. This works perfectly-
I update the conversation again by providing a new user name.
This triggers the entity just fine. You can see the entity being detected properly.
However, the memory value remains the same.
What I wanted was for the memory variable to replace 'Dev' with 'John'.
Remember that:
memory <> Intent
You can set memory in the message section, or update it automatically using, for example, a requirement: in that case, every time the skill is triggered, the value stored under that memory ID is replaced.
EDIT: Because the set-memory field expects JSON, you can't use memory the way you want. But if you reset that memory ID somewhere relevant in the chat (in my sample I delete it right after saying Hi XXX), then when the skill is triggered again it will "replace" it with the new value.
In the requirement I set the golden entity #Person to the variable "name", and if it is missing I ask for the name.
Sample Image
The memory is a persistent object, so if you want to reset it you need either specific conditions within the builder, or to go through a webhook and reset the memory in backend code.
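As a sketch of the webhook route (the exact payload shape should be checked against the SAP Conversational AI documentation for your version; the field names below are from memory and may differ): the webhook's reply can carry a `conversation.memory` object, and returning that object with the key overwritten (or omitted) effectively replaces or resets the memory ID on every call:

```json
{
  "replies": [
    { "type": "text", "content": "Hi {{memory.name.raw}}!" }
  ],
  "conversation": {
    "memory": {
      "name": { "raw": "John", "value": "John" }
    }
  }
}
```

The backend would read the newly detected #USER_NAME entity from the incoming request and write it into `memory.name` unconditionally, which sidesteps the builder's "only set if unset" behavior.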
Problem Context
I am trying to generate a total (linear) order of event items per key from a real-time stream where the order is event time (derived from the event payload).
Approach
I had attempted to implement this using streaming as follows:
1) Set up non-overlapping sequential windows, e.g. of 5 minutes' duration
2) Establish an allowed lateness - it is fine to discard late events
3) Set the accumulation mode to retain all fired panes
4) Use the AfterWatermark trigger
5) When handling a triggered pane, only consider it if it is the final one
6) Use GroupByKey to ensure all events in this window for this key are processed as a unit on a single resource
While this approach ensures linear order for each key within a given window, it does not make that guarantee across windows: a later window of events for the key could be processed at the same time as an earlier one, which could easily happen if the first window failed and had to be retried.
I'm considering adapting this approach where the realtime stream can first be processed so that it partitions the events by key and writes them to files named by their window range.
Due to the parallel nature of Beam processing, these files will also be generated out of order.
A single process coordinator could then submit these files sequentially to a batch pipeline - only submitting the next one when it has received the previous file and that downstream processing of it has completed successfully.
The problem is that Apache Beam will only fire a pane if there was at least one element in that time window. Thus, if there are gaps in the events, there can be gaps in the generated files - i.e. missing files. The problem with missing files is that the coordinating batch processor cannot distinguish between the time window having passed with no data and a failure, in which case it cannot proceed until the file finally arrives.
One way to force the event windows to trigger might be to somehow add dummy events to the stream for each partition and time window. However, this is tricky to do: if there are large gaps in the time sequence, and the dummy events end up surrounded by much later events, they will be discarded as late.
Are there other approaches to ensuring there is a trigger for every possible event window, even if that results in outputting empty files?
Is generating a total ordering by key from a realtime stream a tractable problem with Apache Beam? Is there another approach I should be considering?
Depending on your definition of tractable, it is certainly possible to totally order a stream per key by event timestamp in Apache Beam.
Here are the considerations behind the design:
Apache Beam does not guarantee in-order transport, so there is no use enforcing order within a pipeline. So I will assume you are doing this so you can write to an external system that can only handle things arriving in order.
If an event has timestamp t, you can never be certain no earlier event will arrive unless you wait until t is droppable.
So here's how we'll do it:
We'll write a ParDo that uses state and timers (blog post still under review) in the global window. This makes it a per-key workflow.
We'll buffer elements in state when they arrive, so your allowed lateness affects how efficient a data structure you need. What you need is a heap, to peek and pop the minimum timestamp and element; there's no built-in heap state, so I'll just write it as a ValueState.
We'll set an event-time timer to receive a callback when an element's timestamp can no longer be contradicted.
I'm going to assume a custom EventHeap data structure for brevity. In practice, you'd want to break this up into multiple state cells to minimize the data transferred. A heap might be a reasonable addition to the primitive state types.
I will also assume that all the coders we need are already registered and focus on the state and timers logic.
new DoFn<KV<K, Event>, Void>() {
  @StateId("heap")
  private final StateSpec<ValueState<EventHeap>> heapSpec = StateSpecs.value();

  @TimerId("next")
  private final TimerSpec nextTimerSpec = TimerSpecs.timer(TimeDomain.EVENT_TIME);

  @ProcessElement
  public void process(
      ProcessContext ctx,
      @StateId("heap") ValueState<EventHeap> heapState,
      @TimerId("next") Timer nextTimer) {
    EventHeap heap = firstNonNull(
        heapState.read(),
        EventHeap.createForKey(ctx.element().getKey()));
    heap.add(ctx.element().getValue());
    heapState.write(heap); // persist the mutated heap
    // When the watermark reaches this time, no more elements
    // can show up that have earlier timestamps
    nextTimer.set(heap.nextTimestamp().plus(allowedLateness));
  }

  @OnTimer("next")
  public void onNextTimestamp(
      OnTimerContext ctx,
      @StateId("heap") ValueState<EventHeap> heapState,
      @TimerId("next") Timer nextTimer) {
    EventHeap heap = heapState.read();
    // If the timer at time t was delivered, the watermark must
    // be strictly greater than t
    while (!heap.isEmpty() && !heap.nextTimestamp().isAfter(ctx.timestamp())) {
      writeToExternalSystem(heap.pop());
    }
    heapState.write(heap);
    if (!heap.isEmpty()) { // guard added; assumes EventHeap exposes isEmpty()
      nextTimer.set(heap.nextTimestamp().plus(allowedLateness));
    }
  }
}
This should hopefully get you started on the way towards whatever your underlying use case is.
Any advice on how I can replace the following code from AX 2009?
display amount mrpg_limit()
{
    return HRPLimitTableRelationship::find("Spending", emplTable_1.PartyId).LimitValue;
}
The HRPLimitTableRelationship table was split into multiple tables:
HRPApprovedLimit
HRPApprovedLimitAmount
If you want more details about the process of how this was changed, look at the data upgrade scripts on msdn: http://msdn.microsoft.com/EN-US/library/jj737032.aspx
This TechNet article describes the actual creation of signing limits, in case you need it for testing.
http://technet.microsoft.com/en-us/library/hh271654.aspx
Taking a shot in the dark at getting the information you want: there is a class called PurchReqDocument with a static method spendingLimitStatic, to which you supply the worker RecId and a currency code.
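A hedged sketch of what the replacement display method might look like in AX 2012 (the lookup from emplTable_1 to an HcmWorker record is an assumption - adapt it to however your form resolves the worker - and the exact signature and return type of spendingLimitStatic should be verified in your AOT before relying on this):

```
display amount mrpg_limit()
{
    // Hypothetical: resolve the worker backing emplTable_1; adjust to your data source.
    HcmWorker worker = HcmWorker::findByPerson(emplTable_1.PartyId);

    // Assumed usage: worker RecId plus a currency code, per the answer above.
    return PurchReqDocument::spendingLimitStatic(worker.RecId, CompanyInfo::standardCurrency());
}
```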