What causes a breakpoint to fail to bind?
I receive an error when attempting to execute the following expression:
let grid_1 = { grid with Four= grid |> update };; // Updated here
The error I receive is as follows:
Code:
type State = Alive | Dead

type Response =
    | Survives
    | Dies
    | Resurected

type Grid = { Zero:State
              One:State
              Two:State
              Three:State
              Four:State // Central
              Five:State
              Six:State
              Seven:State
              Eight:State }

let isUnderPopulated (grid:Grid) =
    let zero = if grid.Zero = Alive then 1 else 0
    let one = if grid.One = Alive then 1 else 0
    let two = if grid.Two = Alive then 1 else 0
    let three = if grid.Three = Alive then 1 else 0
    let four = if grid.Four = Alive then 1 else 0
    let five = if grid.Five = Alive then 1 else 0
    let six = if grid.Six = Alive then 1 else 0
    let seven = if grid.Seven = Alive then 1 else 0
    let eight = if grid.Eight = Alive then 1 else 0
    let livingCount = zero + one + two + three + four + five + six + seven + eight
    livingCount < 2

let grid = { Zero = Dead
             One = Dead
             Two = Dead
             Three = Dead
             Four = Dead
             Five = Dead
             Six = Dead
             Seven = Dead
             Eight = Dead }

let update (grid:Grid) =
    let underPopulated = grid |> isUnderPopulated
    if underPopulated then Dead
    else Alive

let grid_1 = { grid with Four = grid |> update };; // Updated here
You may be trying to bind a breakpoint in an application that is out of date, meaning that changes have been made to the source but it has not yet been recompiled.
Try:
Compile in Debug mode.
Try Clean Solution before setting the breakpoint.
Go to the Debug folder and delete the [Your application].pdb file.
Then do a Build or Rebuild of your application.
Go to the Debug folder and confirm you have a brand new [Your application].pdb file.
Then try to set your breakpoint.
You may not be generating debug information, or you may be building with optimizations enabled, either of which can have an effect on this.
For C++ projects:
Check the following project properties:
C++/General/Debug Information Format: Program Database.
C++/Optimization: Disabled.
C++/Code Generation/Runtime Library: Multi-threaded Debug.
Linker/Debugging/Generate Debug Info: Yes.
Linker/Debugging/Generate Program Database: $(TargetDir)$(TargetName).pdb.
Linker/Manifest File/Generate Manifest: No.
Linker/Manifest File/Allow Isolation: No.
Linker/Embedded IDL/Ignore embedded IDL: Yes.
For C# projects, check the following:
In the project properties, set Build/General/Optimize code: Disabled.
In the IDE settings, under Debug/Options/Debugging/General, enable Suppress JIT optimization on module load (Managed only).
It may be a bug in your VS.
Try opening your solution in the IDE on another machine. If you can bind a breakpoint on a different machine, this can mean there is an issue with either your VS installation or your OS.
Check that your IDE is up to date:
There have been reports of issues like this in the VS2013 RTM, as well as in VS2015 Update 1 and Update 2.
In VS, go to Tools/Extensions & Updates/Updates/Product Updates and see what version you are running. If an update is needed, it will appear there.
It may be due to a bug in the OS.
If you're running Windows 10, there was a reported bug regarding this issue in build 14251. It was resolved in build 14257 (and above).
I need to identify whether the device has been rebooted.
Currently I save a timestamp in the database and periodically check the time since the last boot, using the following code as suggested on the Apple forums:
import Foundation

func bootTime() -> Date? {
    // Ask the kernel for the boot time via the kern.boottime sysctl.
    var tv = timeval()
    var tvSize = MemoryLayout<timeval>.size
    let err = sysctlbyname("kern.boottime", &tv, &tvSize, nil, 0)
    guard err == 0, tvSize == MemoryLayout<timeval>.size else {
        return nil
    }
    // Convert the timeval (seconds + microseconds) into a Date.
    return Date(timeIntervalSince1970: Double(tv.tv_sec) + Double(tv.tv_usec) / 1_000_000.0)
}
But the problem with this is that even without a reboot, the tv.tv_sec value differs between calls by up to about 30 seconds (it varies anywhere from 0 to 30 seconds).
Does anybody have any idea about this variation? Or is there another, more reliable way to identify a device reboot, with or without sysctl?
https://developer.apple.com/forums/thread/101874?answerId=309633022#309633022
Any pointer is highly appreciated.
I searched over SO, and all the answers point to the solution mentioned here, which has the issue I described. Please don't mark this as a duplicate.
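For what it's worth, the periodic check described above can absorb that drift by comparing boot times with a tolerance rather than for exact equality. Below is a minimal sketch of that idea; the deviceWasRebooted name and the 60-second tolerance are assumptions for illustration, not part of the original code.

// Sketch: treat boot times within `tolerance` seconds of each other as the
// same boot, to absorb the ~30 s variation described above.
func deviceWasRebooted(storedBootTime: Date, tolerance: TimeInterval = 60) -> Bool {
    guard let current = bootTime() else { return false }
    // A difference larger than the tolerance suggests the kernel boot time
    // really changed, i.e. the device rebooted since the value was stored.
    return abs(current.timeIntervalSince(storedBootTime)) > tolerance
}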
I would like to add an interface to AppRTCMobile that starts the WebRTC Call module directly, in order to make an audio call between two phones on a LAN (both IP addresses and port numbers are already known). The code runs, but the app crashes every time a method is invoked through RtcEventLog. I do not know whether calling Call directly like this is reasonable. I sincerely thank you for your help, as I have not found a solution.
Below is the source code; please help me find the problem.
std::unique_ptr<RtcEventLog> event_log = webrtc::RtcEventLog::Create();
webrtc::Call::Config callConfig = webrtc::Call::Config(event_log.get());
callConfig.bitrate_config.max_bitrate_bps = 500*1000;
callConfig.bitrate_config.min_bitrate_bps = 100*1000;
callConfig.bitrate_config.start_bitrate_bps = 250*1000;
webrtc::AudioState::Config audio_state_config = webrtc::AudioState::Config();
cricket::VoEWrapper* g_voe = nullptr;
rtc::scoped_refptr<webrtc::AudioDecoderFactory> g_audioDecoderFactory;
g_audioDecoderFactory = webrtc::CreateBuiltinAudioDecoderFactory();
g_voe = new cricket::VoEWrapper();
audio_state_config.audio_processing = webrtc::AudioProcessing::Create();
g_voe->base()->Init(NULL,audio_state_config.audio_processing,g_audioDecoderFactory);
audio_state_config.voice_engine = g_voe->engine();
audio_state_config.audio_mixer = webrtc::AudioMixerImpl::Create();
callConfig.audio_state = AudioState::Create(audio_state_config);
std::unique_ptr<RtcEventLog> event_logg = webrtc::RtcEventLog::Create();
callConfig.event_log = event_logg.get();
g_call = webrtc::Call::Create(callConfig);
g_audioSendTransport = new AudioLoopbackTransport();
webrtc::AudioSendStream::Config config(g_audioSendTransport);
g_audioSendChannelId = g_voe->base()->CreateChannel();
config.voe_channel_id = g_audioSendChannelId;
g_audioSendStream = g_call->CreateAudioSendStream(config);
webrtc::AudioReceiveStream::Config AudioReceiveConfig;
AudioReceiveConfig.decoder_factory = g_audioDecoderFactory;
g_audioReceiveChannelId = g_voe->base()->CreateChannel();
AudioReceiveConfig.voe_channel_id = g_audioReceiveChannelId;
g_audioReceiveStream = g_call->CreateAudioReceiveStream(AudioReceiveConfig);
g_audioSendStream->Start();
g_audioReceiveStream->Start();
Here is a screenshot of the error at the moment of the crash. Please tell me if you need more information.
Your code crashed at event_log_->LogAudioPlayout()...
It's obvious that the event_log_ object has already been released.
Objects managed by unique_ptr or scoped_refptr are released once they go out of scope, but in your case they may still be used afterwards, and that leads to the crash. So keep these objects alive in global memory, or otherwise retain them. (In the code above, both event_log and event_logg are local unique_ptrs, yet raw pointers to them are handed to the Call config.)
I already experienced this issue in the past and fixed it.
But today I downloaded the new Xcode version 9.1 and my app is not building anymore. I get:
Ambiguous reference to member 'filter'
I don't know why; this is not the piece of code I was working on, and the app had been building/compiling fine for weeks.
When I check the release notes on the official Apple website, I don't find any reference to my issue.
So here is the piece of code that was working perfectly 2 hours ago:
var severeWeather: Results<SevereWeather>?
var vigiArray = Array<SevereWeather>()
var redCount: Int = 0
severeWeather = realm.objects(SevereWeather.self)
    .filter(NSPredicate(format: "department != nil"))
    .sorted(byKeyPath: "departmentNumber")
vigiArray = Array(severeWeather!)
redCount = vigiArray.filter { $0.dangerLevels.filter { $0.level.value == 4 }.count > 0 }.count
What is wrong with my code?
RealmCollection also has a filter method, which is implemented differently. For some reason, the Swift compiler doesn't know which one it should refer to.
What about this:
redCount = vigiArray.filter {
    // Let Realm evaluate the inner predicate instead of a Swift closure.
    return $0.dangerLevels.filter(NSPredicate(format: "%K == %@", "level.value", NSNumber(integerLiteral: 4))).count > 0
}.count
Instead of accessing each element in dangerLevels yourself, build an NSPredicate and let Realm do the job for you.
Replace your nested filter with this and it should compile just fine.
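An alternative sketch that also resolves the ambiguity is Realm's string-based filter overload; this assumes level is a RealmOptional<Int> on the elements of dangerLevels, which Realm queries by property name:

// Sketch: filter(_ predicateFormat:) takes a predicate string, so there is
// no overload ambiguity for the Swift compiler to resolve.
redCount = vigiArray.filter { $0.dangerLevels.filter("level == 4").count > 0 }.count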
I was wondering whether (and how much) strong/weak reference management has an impact on code execution, especially when freeing objects to which many classes might hold a weak reference. At first I mistook this for ARC, but it's not.
There's a similar question on the same topic, but it doesn't investigate the performance impact or try to put a number on it.
Let me be clear: I'm not, in any way, suggesting that ARC or strong/weak references might have a bad impact on performance, or saying "don't use this". I LOVE this. I'm just curious about how efficient it is, and how to measure it.
I've put together this piece of code to get an idea of the performance impact of strong/weak references on execution time.
import Foundation

class Experiment1Class {
    weak var aClass: Experiment1Class?
}

class Experiment2Class {
    var aClass: Experiment2Class?
}

var persistentClass: Experiment1Class? = Experiment1Class()
var nonWeakPersistentClass: Experiment2Class? = Experiment2Class()

var classHolder = [Experiment1Class]()
var nonWeakClassholder = [Experiment2Class]()

// Build 1000 objects pointing weakly at persistentClass, and 1000 pointing
// strongly at nonWeakPersistentClass.
for _ in 1...1000 {
    let aNewClass = Experiment1Class()
    aNewClass.aClass = persistentClass
    classHolder.append(aNewClass)

    let someNewClass = Experiment2Class()
    someNewClass.aClass = nonWeakPersistentClass
    nonWeakClassholder.append(someNewClass)
}

// Time how long it takes to nil the reference that frees the object with
// 1000 weak references pointing at it...
let date = Date()
persistentClass = nil
let date2 = Date()

// ...versus nilling a reference to an object that 1000 strong references
// keep alive.
let someDate = Date()
nonWeakPersistentClass = nil
let someDate2 = Date()

let timeExperiment1 = date2.timeIntervalSince(date)
let timeExperiment2 = someDate2.timeIntervalSince(someDate)
print("Time: \(timeExperiment1)")
print("Time: \(timeExperiment2)")
This piece of code only measures the amount of time it takes to free an object and to set all of its references to nil.
If you execute it in a Playground (Xcode 8.3.1) you will see a ratio of about 10:1, but Playground execution is much slower than real execution, so I also suggest executing the above code with the "Release" build configuration.
If you execute it in Release, I suggest you set the iteration count to at least "1000000". The way I did it:
insert the above code into a file test.swift
from the terminal, run swiftc test.swift
execute ./test
Given the nature of this test, I believe the absolute results mean little to nothing, and I believe this has no impact on 99% of the usual apps...
My results so far shows, on Release configuration executed on my Mac:
Time: 3.99351119995117e-06
Time: 0.0
However, the same execution, in Release, on my iPhone 7 Plus:
Time: 1.4960765838623e-05
Time: 1.01327896118164e-06
Clearly this test shows there should be little to no concern about a real impact on typical apps.
Here are my questions:
Can you think of any other way to measure the impact of strong/weak references on execution time? (I wonder what strategies the system puts in place to improve on this.)
What other metrics could be important? (Like multithreading optimization or thread locking.)
How can I improve this?
EDIT 1
I found this LinkedList test to be very interesting for a few reasons; consider the following code:
//: Playground - noun: a place where people can play

import Foundation

var n = 0

class LinkedList: CustomStringConvertible {
    var count = n
    weak var previous: LinkedList?
    var next: LinkedList?

    deinit {
        // print("Muorte \(count)")
    }

    init() {
        // print("Crea \(count)")
        n += 1
    }

    var description: String {
        get {
            return "Node \(count)"
        }
    }

    func recurseDesc() -> String {
        return(description + " > " + (next?.recurseDesc() ?? "FIN"))
    }
}
func test() {
    var controlArray = [LinkedList]()
    var measureArray = [LinkedList]()

    var currentNode: LinkedList? = LinkedList()
    controlArray.append(currentNode!)
    measureArray.append(currentNode!)
    var startingNode = currentNode

    for _ in 1...31000 {
        let newNode = LinkedList()
        currentNode?.next = newNode
        newNode.previous = currentNode!
        currentNode = newNode
        controlArray.append(newNode)
        measureArray.append(newNode)
    }

    controlArray.removeAll()
    measureArray.removeAll()
    print("test!")

    let date = Date()
    currentNode = nil
    let date2 = Date()

    let someDate = Date()
    startingNode = nil
    let someDate2 = Date()

    let timeExperiment1 = date2.timeIntervalSince(date)
    let timeExperiment2 = someDate2.timeIntervalSince(someDate)
    print("Time: \(timeExperiment1)")
    print("Time: \(timeExperiment2)")
}

test()
I found the following (running in the Release configuration):
I wasn't able to run more than ~32000 iterations on my phone; it crashes with EXC_BAD_ACCESS during deinit (yes, during DEINIT... isn't that weird?).
The timing for ~32000 iterations is 0 vs 0.06 seconds, which is huge!
I think this test is very CPU intensive because the nodes are freed one after the other (if you look at the code, only next is strong; previous is weak)... so once the first one is freed, the others fall one after the other, but not all together.
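That cascade is also a plausible explanation for the deinit crash: releasing the first node triggers one nested release per node, so a ~32000-node chain can overflow the stack. Below is a sketch of an iterative teardown that avoids the deep recursion; the tearDown helper is an illustration, not part of the original test.

// Sketch: unlink the chain front-to-back so each node's deinit runs with no
// further strong references to release, keeping the call stack flat.
func tearDown(_ head: LinkedList?) {
    var node = head
    while let current = node {
        node = current.next   // hold the rest of the chain strongly
        current.next = nil    // `current` can now be freed without recursing
    }
}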
There is really no such thing as automatic memory management. Memory is managed by retain and release regardless of whether you use ARC; the difference is who writes the code, you or the compiler. There is manual memory-management code that you write, and there is manual memory-management code that ARC writes. In theory, ARC inserts into your code exactly the same retain and release calls that you would have inserted if you had done it correctly yourself. Therefore the difference in performance should be epsilon-tiny.
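To make the retain/release point concrete, here is a minimal sketch using Unmanaged, which exposes the operations ARC normally emits for you; the Thing class is just a stand-in:

import Foundation

class Thing {}

// Conceptually, every strong reference ARC tracks corresponds to a balanced
// retain/release pair like the ones written out explicitly here.
let ref = Unmanaged.passRetained(Thing())  // +1: a strong reference is created
_ = ref.retain()                           // +1: a second strong reference appears
ref.release()                              // -1: that reference goes away
ref.release()                              // -1: count reaches zero, object is deallocated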
Hi, I am developing an app in which I need to cache 50 images (the size of all images together is 2.5 MB). It caches the images, but it also increases the memory used by the Apple Watch app by 10 MB, which makes the app crash.
Xcode gives the error "Message from debugger: Terminated due to memory error".
The code I am using is below:
for (var i : Int = 1; i < 26; i++) {
    let filenameHuman = NSString(format: "human_%d", i)
    let filenameZombie = NSString(format: "zombie_%d", i)
    var imageHuman : UIImage! = UIImage(named: filenameHuman as String)
    var imageZombie : UIImage! = UIImage(named: filenameZombie as String)
    WKInterfaceDevice.currentDevice().addCachedImage(imageZombie, name: filenameZombie as String)
    WKInterfaceDevice.currentDevice().addCachedImage(imageHuman, name: filenameHuman as String)
}
NSLog("Currently cached images: %@", WKInterfaceDevice.currentDevice().cachedImages)
Also, here is the screenshot of the memory allocations and the memory leak:
Please help, thanks in advance.
Are any of your images actually animations (that would use up more space)?
Collect the return value of each call to addCachedImage(). False means the image could not be added; check for that, as it might give clues about a particular problem image.
Before caching anything, try emptying the cache with removeAllCachedImages. This way you start clean, with no previous cache interactions using up the pool of memory.
I think your problem is not a leak; I think your problem is over-retained allocations. So use the Allocations tool (with retain-count tracking) to see how much memory was allocated (VM allocations) and how many entities are holding on to that memory (retain counts).
Inside your loop, try to autorelease the memory used by the images, since you don't want to wait for the autorelease to happen later when the method returns (see the fuller sketch after the snippet below):
for (var i : Int = 1; i < 26; i++) {
    autoreleasepool {
        /* code to cache images */
    }
}
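Putting those suggestions together, here is a sketch for the human_ images (the zombie half would look the same); the failure log message is illustrative, not from the original code:

// Sketch: empty the cache first, wrap each iteration in an autorelease pool,
// and check the Bool returned by addCachedImage.
WKInterfaceDevice.currentDevice().removeAllCachedImages()
for i in 1..<26 {
    autoreleasepool {
        let name = String(format: "human_%d", i)
        if let image = UIImage(named: name) {
            if !WKInterfaceDevice.currentDevice().addCachedImage(image, name: name) {
                // False means the image pool is full or the image is problematic.
                NSLog("Could not cache %@", name)
            }
        }
    }
}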