Breakpoint for particular groups in Xcode - iOS

I want to put a breakpoint on all the functions in my app. From this Stack Overflow question (How to automatically set breakpoints on all methods in XCode?) I got this:
breakpoint set -r . -s [PRODUCT_NAME]
And it's working. But it also sets breakpoints in files coming from CocoaPods, which I don't want - I only want breakpoints in my own code. Since there are many pod files, execution keeps stopping in them and it becomes difficult to follow the flow. I want to set breakpoints in a particular group of files.
How can we do this?

You can say:
(lldb) break set -r . -f <FILE1> -f <FILE2>
If you do it this way you will have to list all the files by hand, but it will get the job done. There isn't currently a version of the file specifier that takes glob patterns.
Note, if you do:
(lldb) help break set
The first part of the listing will show you what options the command accepts. So for instance:
breakpoint set [-DHo] -r <regular-expression> [-s <shlib-name>] [-i <count>] [-c <expr>] [-x <thread-index>] [-t <thread-id>] [-T <thread-name>] [-q <queue-name>] [-f <filename>] [-L <language>] [-K <boolean>] [-N <breakpoint-name>]
shows that the break set -r command accepts filenames. The help doesn't say you can specify this multiple times on a line, but it's worth trying, and in fact it does...
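If the files you care about all live under one group's folder on disk, a small shell sketch can generate that command for you (the MyApp directory and the file extensions are assumptions - substitute your own):
# Emit one -f option per source file under MyApp/, then print the full command.
# (Assumes no spaces in file names.)
cmd='break set -r .'
for f in $(find MyApp \( -name '*.m' -o -name '*.swift' \) -exec basename {} \;); do
    cmd="$cmd -f $f"
done
echo "$cmd"   # paste the output at the (lldb) prompt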

Related

Make offsetting file contents during build

I'm trying to use Make to ... make modular Dockerfiles. Long story short, I want to centralize certain elements and make them composable and reusable, like classes and functions really, but the Dockerfile syntax does not - and according to the developers, will not - offer any facilities along the lines of C's #include or similar composability solutions. Not to worry, #include and friends to the rescue!
Except...
I have the following Makefile in my project:
BUILD_DIR := ${CI_PROJECT_DIR}/build
TEMPLATE_FILES := $(shell find ${CI_PROJECT_DIR} -name '*.build')
TEMPLATE_FILENAMES := $(foreach file,$(TEMPLATE_FILES),$(BUILD_DIR)/$(notdir $(file)).built)
BUILT_TEMPLATES := $(TEMPLATE_FILENAMES:.build.built=.built)
DOCKER_FILES := $(shell find ${CI_PROJECT_DIR} -name '*.Dockerfile')
DOCKER_OBJS := $(foreach file,$(DOCKER_FILES),$(BUILD_DIR)/$(notdir $(file)))

all: $(BUILT_TEMPLATES) $(DOCKER_OBJS)

$(BUILD_DIR)/%.built: $(TEMPLATE_FILES) $(BUILD_DIR) # build any templated Dockerfiles
    cpp -E -P -o $(BUILD_DIR)/$(notdir $@) -I ${CI_PROJECT_DIR}/modules $<
    sed -i 's/__NL__ /\n/g' $(BUILD_DIR)/$(notdir $@)

$(BUILD_DIR)/%.Dockerfile: $(DOCKER_FILES) $(BUILD_DIR)
    cp $< $(BUILD_DIR)/$(notdir $@)

$(BUILD_DIR):
    mkdir -p $(BUILD_DIR)

.PHONY: clean
clean:
    -rm -r $(BUILD_DIR)
The objective is to run the templated Dockerfiles through GCC to compile the #includes in them into proper Docker instructions, and just copy the rest of the files. Sounds simple enough.
Except that it looks like all the target files are "offset" from their sources - like the file names are correct, but the contents are from a file elsewhere in the list, and with no discernible order either.
One thing that I'm fairly sure is wrong - though the obvious alternative turns out even worse - is the line
$(BUILD_DIR)/%.built: $(TEMPLATE_FILES) $(BUILD_DIR) # build any templated Dockerfiles
By all manuals and documentation, it ought to be
$(BUILD_DIR)/%.built: %.build $(BUILD_DIR) # build any templated Dockerfiles
but that's even worse, because then Make just says make: *** No rule to make target '/docker/build/runner-dart-2-18-firebase.built', needed by 'all'. Stop.
I'm out of ideas here, along with my limited knowledge of Make. What am I missing to make Make make - sorry - my Dockerfiles?
This line:
$(BUILD_DIR)/%.built: $(TEMPLATE_FILES) $(BUILD_DIR)
Says that if make wants to build a target that matches that pattern, and it can find all the prerequisites, then the pattern rule matches and the recipe can be used. Let's ignore BUILD_DIR (note that it's always a bad idea to list a directory as a prerequisite, but that's not causing this problem). Suppose TEMPLATE_FILES is set to the value ./foo/foo.build ./bar/bar.build. Now the above rule expands to:
./build/%.built: ./foo/foo.build ./bar/bar.build ./build
What is the recipe?
cpp -E -P -o $(BUILD_DIR)/$(notdir $@) -I ${CI_PROJECT_DIR}/modules $<
First, it's always wrong to create a file that is not exactly $@, so you should use just $@, not $(BUILD_DIR)/$(notdir $@). But more importantly, what will $< be set to? It is always set to the first prerequisite, and the first prerequisite is always ./foo/foo.build. So every time you run this recipe, regardless of which .built file you're trying to create, you will always be preprocessing the first .build file.
Your idea that you want this instead:
$(BUILD_DIR)/%.built: %.build $(BUILD_DIR)
is correct, in general. Why do you get the error? Because if you are trying to build the target ./build/foo.built, then the stem (part that matches %) is foo. Then make will look to see if the prerequisite foo.build exists or can be created, because you said the prerequisite is %.build. That file does NOT exist and CANNOT be created (make doesn't know how to create it), because the file is ./foo/foo.build not foo.build which is a totally different file.
You have three options. You can either write separate rules for each source directory:
$(BUILD_DIR)/%.built: foo/%.build
...
$(BUILD_DIR)/%.built: bar/%.build
...
Or, you can change your generated files so they are not all in the same directory but instead keep the source directory structure; you would change this:
TEMPLATE_FILENAMES := $(foreach file,$(TEMPLATE_FILES),$(BUILD_DIR)/$(notdir $(file)).built)
BUILT_TEMPLATES := $(TEMPLATE_FILENAMES:.build.built=.built)
to just this:
BUILT_TEMPLATES := $(patsubst %.build,$(BUILD_DIR)/%.built,$(TEMPLATE_FILES))
then create the output directory as part of the recipe:
    @mkdir -p $(@D)
    cpp -E -P -o $@ -I ${CI_PROJECT_DIR}/modules $<
    sed -i 's/__NL__ /\n/g' $@
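For instance, if TEMPLATE_FILES contains a relative path like ./foo/foo.build, the patsubst above yields $(BUILD_DIR)/./foo/foo.built - the foo/ source directory is mirrored under the build directory - which is why the recipe now creates the output directory with mkdir -p $(@D).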
Or finally, you could use VPATH to tell make what directories to look in to find the *.build files:
VPATH := $(sort $(dir $(TEMPLATE_FILES)))
(note, you should choose only one of these options).
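Putting that last option together, a minimal sketch of how the pattern rule could then look (untested against your tree; recipe lines must start with a tab):
VPATH := $(sort $(dir $(TEMPLATE_FILES)))

# The stem's .build file is found in one of the VPATH directories.
$(BUILD_DIR)/%.built: %.build
    @mkdir -p $(@D)
    cpp -E -P -o $@ -I ${CI_PROJECT_DIR}/modules $<
    sed -i 's/__NL__ /\n/g' $@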

Why can't ld called from MSYS find an (existing static) library when arguments are read from a response @file containing backslashes?

This is basically the same issue as in mingw ld cannot find some library which is exist in the search path, MinGW linker can't find MPICH2 libraries - and I'm aware that there are heaps of posts on StackOverflow regarding the issue of static and dynamic linking with MinGW - but I couldn't find anything that explains how I can troubleshoot.
I am building a project with a huge linker command line (via g++) on MinGW, in an MSYS2 shell (git-bash.exe). The process fails with, among other errors:
/z/path/to/../../../../i686-w64-mingw32/bin/ld.exe: cannot find -lssl
I add -Wl,--verbose to the g++ linker call (to be passed to ld), and I can see for the -L/z/path/to/libs/openssl/lib/mingw -lssl:
...
attempt to open /z/path/to/libs/openssl/lib/mingw/libssl.a failed
...
/z/path/to/libs/openssl/lib/mingw/ssl.dll failed
attempt to open /z/path/to/libs/openssl/lib/mingw\libssl.a failed
...
But this is weird, because the file exists?
$ file /z/path/to/libs/openssl/lib/mingw/libssl.a
/z/path/to/libs/openssl/lib/mingw/libssl.a: current ar archive
(... and it was built with the same compiler on the same machine)?
Weirdly, once it attempts to open with forward slash .../libssl.a, once with backslash ...\libssl.a - but at least the first path checks out in a bash shell, as shown above?
It gets even worse if I try to specify -l:libssl.a -- or if I specify -L/z/path/to/libs/openssl/lib/mingw -Wl,-Bstatic -lssl -- instead; then all attempts to open are with a backslash:
...
attempt to open /z/path/to/scripts/other/build/openssl/build/mingw/lib\libssl.a failed
attempt to open /z/path/to/libs/openssl/lib/mingw\libssl.a failed
...
To top it all off, if I look it up manually through the command line using ld, it is found ?!:
$ ld -L/z/path/to/libs/openssl/lib/mingw -lssl --verbose
attempt to open Z:/path/to/libs/openssl/lib/mingw/libssl.dll.a failed
attempt to open Z:/path/to/libs/openssl/lib/mingw/ssl.dll.a failed
attempt to open Z:/path/to/libs/openssl/lib/mingw/libssl.a succeeded
Does anyone have an idea why this happens, and how can I get ld to finally find these libraries? Or rather - how can I troubleshoot, and understand why these libraries are not found, when they exist at the paths where ld tries to open them?
OK, found something more - not sure if this is a bug; but my problem is that I'm actually reading arguments from a file (otherwise I get g++: Argument list too long). So, to simulate that:
$ echo " -Wl,--verbose -L/z/path/to/libs/openssl/lib/mingw -lssl -lcrypto " > tmcd3
$ g++ #tcmd3 2>&1 | grep succeeded | grep ssl
# nothing
$ g++ `cat tcmd3` 2>&1 | grep succeeded | grep ssl
attempt to open Z:/path/to/libs/openssl/lib/mingw/libssl.a succeeded
attempt to open Z:/path/to/libs/openssl/lib/mingw/libcrypto.a succeeded
... it turns out, if the very same arguments are fed on the command line, then static library lookup succeeds - but if the arguments are read from a file through the @ at-sign, then static library lookup fails?! Unfortunately, I cannot use that approach in my actual project, since even with cat, I'd still get g++: Argument list too long ... So how can I fix this?
MSYS has special handling of directory arguments when they are used in the shell. It translates e.g. /<drive_letter>/blabla to the proper Windows-style path. This is to accommodate Unix programs that can't handle Z:-style directory roots.
What you see here is that MSYS isn't performing this translation for strings read from a file. When you think about it, it's very logical, but as you have experienced first-hand, it's also sometimes annoying.
Long story short: don't put Unix style paths in files with command arguments. Instead, pass them through e.g. cygpath -w, which works in MSYS2 (which should be the MSYS that Git for Windows 2+ comes with).
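For example, a small sketch of that approach (the path is illustrative; cygpath -m emits the forward-slash Z:/... form, while cygpath -w emits backslashes):
LIBDIR=$(cygpath -m /z/path/to/libs/openssl/lib/mingw)   # -> Z:/path/to/libs/openssl/lib/mingw
echo " -Wl,--verbose -L$LIBDIR -lssl -lcrypto " > tcmd3
g++ @tcmd3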
Ok, with some more experiments, I noticed that:
-L/z/path/to/libs/openssl/lib/mingw, the Unix path specification, tends to fail - while if we specify the same, except starting with a Windows drive letter, that is:
-LZ:/path/to/libs/openssl/lib/mingw, then things work - also from an arguments file with the @ at-sign:
$ echo " -Wl,--verbose -LZ:/path/to/libs/openssl/lib/mingw -lssl -lcrypto " > tmcd3
$ g++ #tcmd3 2>&1 | grep succeeded | grep ssl
attempt to open Z:/path/to/libs/openssl/lib/mingw/libssl.a succeeded
attempt to open Z:/path/to/libs/openssl/lib/mingw/libcrypto.a succeeded
I guess, since the shell is MSYS2/git-bash.exe, entering full POSIX paths on the shell with /z/... is not a problem, because the shell will convert them - but in a file, there is nothing to convert them, so we must use the Windows/MinGW convention to specify them...

grep warning: recursive directory loop

I'm recursively searching some location, e.g. /cygdrive/c/dev/maindir/dir/
There's a loop inside that directory structure i.e. there's a link .../maindir/dir/loopedDir/loopedDir pointing to .../maindir/dir/loopedDir.
When I run:
grep --exclude="/cygdrive/c/dev/maindir/dir/loopedDir/loopedDir" 'myPattern' -R /cygdrive/c/dev/maindir/dir/
...it works fine, like expected and finds what I need.
However, I also get a warning:
grep: warning: /cygdrive/c/dev/maindir/dir/loopedDir/loopedDir: recursive directory loop
...and I'm wondering why that is. Shouldn't the directory exclusion prevent this particular looping occurrence? How should I modify my query so as not to get the warning?
Add grep's option -s to suppress this and other error messages.
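Applied to the query above (assuming -s suppresses this particular warning as described):
grep -s --exclude="/cygdrive/c/dev/maindir/dir/loopedDir/loopedDir" 'myPattern' -R /cygdrive/c/dev/maindir/dir/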

How to print messages after make is done with CMake?

I'm trying to print messages after the build process completes, using CMake.
I just want to inform the user when the make command finishes without any error.
How can I do it? I tried add_custom_target(), but I cannot choose when it runs.
I also tried add_custom_command(); again, it doesn't give me the right result.
Any idea?
Thank you for your ideas in advance.
You could, indeed, do the following:
add_custom_target( FinalMessage ALL
    ${CMAKE_COMMAND} -E cmake_echo_color --cyan "Compilation is over!"
    COMMENT "Final Message" )
add_dependencies( FinalMessage ${ALL_TARGETS} )
Since that custom target depends on the list of all the targets you previously defined, you make sure it runs last.
To print a message after building a specific target, e.g. make yourtarget, you can use
add_custom_command(TARGET yourtarget POST_BUILD
    COMMAND ${CMAKE_COMMAND} -E cmake_echo_color --cyan
    "Message after yourtarget has been built.")
Instead of POST_BUILD, you could also use PRE_BUILD or PRE_LINK for other purposes, see documentation.
(You specified in the comments that you'd like to print a message after all targets, but the original question is less precise, so this might be of some value for people looking here.)
I just resolved the issue with the help of smarquis.
Thank you.
Here's the step-by-step procedure. Since the parts of my source tree are connected in a complicated way through add_subdirectory() calls, this method can be applied everywhere.
Initialize the ALL_TARGETS variable as cached. Add this line to CMakeLists.txt right below the version-checking command.
Set(ALL_TARGETS "" CACHE INTERNAL "")
Override the Add_library() and Add_executable() methods. If you use any other target-creating commands, override them as well. Add the lines below at the end of the CMakeLists.txt file.
function(Add_library NAME)
    # Record the target name, then forward to the built-in command.
    Set(ALL_TARGETS ${ALL_TARGETS} "${NAME}" CACHE INTERNAL "ALL_TARGETS")
    _add_library(${NAME} ${ARGN})
endfunction()
function(Add_executable NAME)
    Set(ALL_TARGETS ${ALL_TARGETS} "${NAME}" CACHE INTERNAL "ALL_TARGETS")
    _add_executable(${NAME} ${ARGN})
endfunction()
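For illustration, once these overrides are in place, ordinary target declarations get recorded automatically (the target and file names below are hypothetical):
add_executable(my_app main.cpp)         # ALL_TARGETS now contains my_app
add_library(my_utils STATIC utils.cpp)  # ALL_TARGETS now contains my_app;my_utils
(CMake command names are case-insensitive, so plain add_library() calls hit the Add_library() override above.)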
Create a custom target that executes all the things you want to do after building. In this example I just print some information on screen. Add it after the above.
add_custom_target(BUILD_SUCCESSFUL ALL
    DEPENDS ${ALL_TARGETS}
    COMMAND ${CMAKE_COMMAND} -E echo ""
    COMMAND ${CMAKE_COMMAND} -E echo "====================="
    COMMAND ${CMAKE_COMMAND} -E echo " Compile complete!"
    COMMAND ${CMAKE_COMMAND} -E echo "====================="
    COMMAND ${CMAKE_COMMAND} -E echo ""
)
Tada!

Monitoring URLs with Nagios

I'm trying to monitor actual URLs, and not only hosts, with Nagios, as I operate a shared server with several websites, and I don't think it's enough just to monitor the basic HTTP service (I'm including at the very bottom of this question a small explanation of what I'm envisioning).
(Side note: please note that I have Nagios installed and running inside a chroot on a CentOS system. I built nagios from source, and have used yum to install into this root all dependencies needed, etc...)
I first found check_url, but after installing it into /usr/lib/nagios/libexec, I kept getting a "return code of 255 is out of bounds" error. That's when I decided to start writing this question (but wait! There's another plugin I decided to try first!)
After reviewing This Question, which covers practically the same problem I'm having with check_url, I decided to open a new question on the subject because
a) I'm not using NRPE with this check
b) I tried the suggestions made on the earlier question to which I linked, but none of them worked. For example...
./check_url some-domain.com | echo $0
returns "0" (which indicates the check was successful)
I then followed the debugging instructions on Nagios Support to create a temp file called debug_check_url, and put the following in it (to then be called by my command definition):
#!/bin/sh
echo `date` >> /tmp/debug_check_url_plugin
echo $* >> /tmp/debug_check_url_plugin
/usr/local/nagios/libexec/check_url $*
Assuming I'm not in "debugging mode", my command definition for running check_url is as follows (inside command.cfg):
'check_url' command definition
define command{
    command_name    check_url
    command_line    $USER1$/check_url $url$
}
(Incidentally, you can also view what I was using in my service config file at the very bottom of this question)
Before publishing this question, however, I decided to give 1 more shot at figuring out a solution. I found the check_url_status plugin, and decided to give that one a shot. To do that, here's what I did:
mkdir /usr/lib/nagios/libexec/check_url_status/
downloaded both check_url_status and utils.pm
Per the user comment / review on the check_url_status plugin page, I changed "lib" to the proper directory of /usr/lib/nagios/libexec/.
Run the following:
./check_url_status -U some-domain.com
When I ran the above command, I kept getting the following error:
bash-4.1# ./check_url_status -U mydomain.com
Can't locate utils.pm in @INC (@INC contains: /usr/lib/nagios/libexec/ /usr/local/lib/perl5 /usr/local/share/perl5 /usr/lib/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib/perl5 /usr/share/perl5) at ./check_url_status line 34.
BEGIN failed--compilation aborted at ./check_url_status line 34.
So at this point, I give up, and have a couple of questions:
Which of these two plugins would you recommend? check_url or check_url_status?
(After reading the description of check_url_status, I feel that this one might be the better choice. Your thoughts?)
Now, how would I fix my problem with whichever plugin you recommended?
At the beginning of this question, I mentioned I would include a small explanation of what I'm envisioning. I have a file called services.cfg which is where I have all of my service definitions located (imagine that!).
The following is a snippet of my service definition file, which I wrote to use check_url (because at that time, I thought everything worked). I'll build a service for each URL I want to monitor:
###
# Monitoring Individual URLs...
#
###
define service{
    host_name              {my-shared-web-server}
    service_description    URL: somedomain.com
    check_command          check_url!somedomain.com
    max_check_attempts     5
    check_interval         3
    retry_interval         1
    check_period           24x7
    notification_interval  30
    notification_period    workhours
}
I was making things WAY too complicated.
The built-in / installed-by-default plugin, check_http, can accomplish what I wanted and more. Here's how I accomplished it:
My Service Definition:
define service{
    host_name              myers
    service_description    URL: my-url.com
    check_command          check_http_url!http://my-url.com
    max_check_attempts     5
    check_interval         3
    retry_interval         1
    check_period           24x7
    notification_interval  30
    notification_period    workhours
}
My Command Definition:
define command{
    command_name    check_http_url
    command_line    $USER1$/check_http -I $HOSTADDRESS$ -u $ARG1$
}
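With the definitions above, Nagios effectively expands the check into something like the following (assuming the conventional $USER1$ value of /usr/local/nagios/libexec; the address comes from the myers host definition):
/usr/local/nagios/libexec/check_http -I <address-of-myers> -u http://my-url.com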
A better way to monitor URLs is WebInject, which can be used with Nagios.
The problem below is because you don't have the Perl package utils; try installing it.
bash-4.1# ./check_url_status -U mydomain.com Can't locate utils.pm in @INC (@INC contains:
You can write a script plugin. It is easy; you only have to check the URL with something like:
`curl -Is $URL -k | grep HTTP | cut -d ' ' -f2`
$URL is what you pass to the script command as a parameter.
Then check the result: if you get a code greater than 399, you have a problem; otherwise everything is OK. Then exit with the right exit code and the message for Nagios.
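A minimal sketch of such a plugin (the script name and URL handling are assumptions; it relies on curl and uses the standard Nagios exit codes, 0 = OK and 2 = CRITICAL):
#!/bin/sh
# Hypothetical URL-status plugin: pass the URL as the first parameter.
URL="$1"
# Grab the HTTP status code from the response headers (-I), ignoring TLS errors (-k).
STATUS=$(curl -Is "$URL" -k | grep HTTP | cut -d ' ' -f2)
if [ -z "$STATUS" ] || [ "$STATUS" -gt 399 ]; then
    echo "CRITICAL - $URL returned HTTP ${STATUS:-no response}"
    exit 2
else
    echo "OK - $URL returned HTTP $STATUS"
    exit 0
fi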
