Is there a logging framework for iOS that could aid developers in diagnosing app crashes?
You may like:
Lumberjack: stable and traditional (see the setup sketch just after this list)
"It is similar in concept to other popular logging frameworks such as log4j, yet is designed specifically for Objective-C, and takes advantage of features such as multi-threading, grand central dispatch (if available), lockless atomic operations, and the dynamic nature of the Objective-C runtime."
LibComponentLogging: beautiful and hardcore, used by RestKit
"LibComponentLogging is a small logging library for Objective-C applications on Mac OS X and the iPhone OS which provides conditional logging based on log levels and log components. Additionally, different logging strategies can be used, e.g. writing log messages to a file or sending them to the system log, while using the same logging interface."
NSLogger: fancy with a dedicated visualization OS X App
"NSLogger is a high perfomance logging utility which displays traces emitted by client applications running on Mac OS X or iOS (iPhone OS). It replaces your usual NSLog()-based traces and provides powerful additions like display filtering, image and binary logging, traces buffering, timing information, etc."
I know this post is old but I'm looking for one as well. I found one called Lumberjack, though I haven't tried it yet.
I created a simple logging framework that might help. I'd appreciate any feedback you have. Hope it helps.
Link to Project Page
This previous question seems to overlap. But the bottom line is:
NSLog(#"message");
or:
printf("message");
I have a slightly different need: not only do I want to debug crashes, but I also need to debug other errors (NSError, NSException).
I tried all 3 packages mentioned in IlDan's answer. However, all of them require me to adopt a new way of logging, which may not be compatible with the libraries I depend on. For example, I intended to adopt NSLogger but RestKit, which is an important library in my project, uses LibComponentLogging.
So I ended up writing a small pod (https://github.com/kennethjiang/Teleport-NSLog) for that. The mechanism is to redirect stderr (where NSLog and all these logging frameworks write messages to) to a backend HTTP server. Now I can debug my app running on a user's device just as if it were running in my Xcode. :)
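The redirection itself is only a couple of lines. A minimal sketch of the mechanism (not Teleport-NSLog's actual code, and with a local file standing in for the HTTP backend):

// NSLog writes to stderr, so pointing stderr somewhere else captures it.
NSString *logPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"stderr.log"];
freopen([logPath fileSystemRepresentation], "a+", stderr);

NSLog(@"From here on, this line lands in %@ instead of the console", logPath);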
For basic logging, use NSLog(@"your message here")
If you want more flexible logging, look into Lumberjack. Among other things, it lets you disable logging in production builds.
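The usual pattern for that (a sketch assuming the DDLog macros and a DEBUG flag defined in your debug configuration) is to pick the log level per build configuration, so anything below the threshold is effectively stripped from release builds:

#ifdef DEBUG
static const int ddLogLevel = LOG_LEVEL_VERBOSE;   // chatty while developing
#else
static const int ddLogLevel = LOG_LEVEL_ERROR;     // errors only in production
#endif

DDLogVerbose(@"Only shows up in debug builds");
DDLogError(@"Shows up everywhere");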
I want to decompile an iOS 3.1.3 kernel to better understand it, but I'm not sure where to start. Don't be fooled, though: I'm no greenhorn when it comes to programming.
The kernel is open source so you can view and compile it. iOS 3.1.3 is in the repo but you will also want to look at 3.0.
From Apple:
As the first major computer company to make Open Source development a key part of its ongoing software strategy, Apple remains committed to the Open Source development model. Major components of Mac OS X, including the UNIX core, are made available under Apple’s Open Source license, allowing developers and students to view source code, learn from it and submit suggestions and modifications.
The XNU kernel in its iOS incarnation is not open source, though XNU in its i386/x86_64 (and, for older versions, PPC) incarnation is.
XNU is built up internally of several layers, with the bottom two - the platform expert and the ml_* APIs - serving as the "glue" to the underlying hardware. This means that without those pieces you could compile the source (with an ARMv7 cross compiler, like the one in the iPhone SDK), but the kernel wouldn't actually boot.
Another difficulty is in the kernel extensions (XNU-speak for "modules"). These are drivers, without which you can't really do much - again, with the kernel not booting since it can't initialize any of its platform dependencies.
Also, contrary to how it may seem, though the iOS and OS X kernels are very similar, there are some subtle differences (which were visible via #ifdef CONFIG_EMBEDDED and #ifdef ARM until after 1699, when Apple realized they were leaking information useful to jailbreakers, and started using a preprocessor to strip the iOS-specific modifications before making the source public).
Decompilation is a different matter. It's possible to disassemble the kernel image (once decrypted or dumped) and work back through fairly readable assembly (though not to a full source level). IDA and other specialized tools (e.g. jtool) have these capabilities.
There have been at least two projects to get the open source version to compile and boot for ARM: one by Christina Brooks(?) and another by WinOCM. The latter, who gained notoriety for knowing XNU's ARM implementation inside out, was eventually hired by Apple, thereby reducing the chance of any open source implementation ever seeing the light of day.
As someone who has never developed an iOS app but hopes to soon, and who has never owned a Mac (to date), the whole Xcode setup and the process for developing apps was a little lost on me.
To start, which languages are supported for development was one area I wasn't sure about:
I've seen C, C++ and Objective-C referenced as the languages used to write the apps. But I've also seen JavaScript + HTML + CSS and .NET as options, along with a host of other compiled languages, with people arguing about whether you can or can't use them.
Another thing I wondered about was Xcode: does it support all the mentioned languages, or is it an IDE built for a specific language such as Cocoa? If so, how would someone use JavaScript, for example, to write an app?
I'm sure this has a fairly simple answer for Apple users, but I struggled to relate to it coming from a non-Apple background.
Update
Thanks for the great answers and insight; hopefully this post will be helpful to others who don't have an Apple/iOS background.
All three language alternatives that you mentioned are available to iOS application developers *.
Objective-C/C++ offers a way of making native apps for iOS: you produce machine code that runs on the device. You use Xcode to develop in these languages.
You can build apps in JavaScript + HTML + CSS because iOS comes with a browser. Apple offers a full-screen web-app mode that gives your apps a fairly native look and feel, so they can behave like first-class citizens.
You can build your apps in C# as well by using MonoTouch. This is different from .NET in that, although the language is the same, your code is compiled into a native binary ahead of time. Although MonoTouch eliminates the learning curve associated with a new language, you still need to do a fair bit of learning to adapt your knowledge of .NET to a different platform.
* Except Cocoa, which is not a language, but a collective name for Apple's frameworks for developing under OS X and iOS.
iOS's native language is Objective-C. While it's true you can use C++ to make apps (Cocos, for example, is mostly written in C++), it isn't the 'native' language.
As for the other languages you mention, while it's possible to create apps using them, they won't be 'native'; they normally rely on another IDE/library, PhoneGap or Adobe AIR for example. Most of these also support cross-platform development.
Where I work we also use HTML5 to create a 'faux native' interface/experience.
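As a rough sketch of that wrapper idea (not PhoneGap's actual code, just the general shape): the native side can be little more than a view controller that loads HTML/JS/CSS bundled with the app into a full-screen UIWebView.

// In a view controller: display index.html shipped inside the app bundle.
- (void)viewDidLoad
{
    [super viewDidLoad];

    UIWebView *webView = [[UIWebView alloc] initWithFrame:self.view.bounds];
    webView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
    [self.view addSubview:webView];

    NSURL *indexURL = [[NSBundle mainBundle] URLForResource:@"index" withExtension:@"html"];
    [webView loadRequest:[NSURLRequest requestWithURL:indexURL]];
}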
If you're new to iOS, it's worthwhile checking out Apple's documentation/sample code at:
https://developer.apple.com
There is a wealth of knowledge there that should set you on the right path. It isn't, however, something I'd recommend if you don't have any experience with object-oriented programming.
Coming from a C++ background myself, I didn't find it too difficult, but I have been working on iOS for about 3 years (on and off) and am only just starting to become truly fluent in its processes and conventions.
Hope this helps, let me know if you want to know anything specific.
I'm in need of an open source solution/library to stream RTSP/RTMP to an iOS application. I need to build an app that connects to a media server and opens the provided video stream. I believe there have to be libraries out there, but I have yet to find one that is open source, compiles, actually works, and runs on iOS 5+, iPhone 4+. I do not have a preference; RTMP or RTSP will suffice. Preferably the one with the least amount of work. I have RTSP working on the Android side, but nothing for iOS yet.
This is what I already know from research today -
RTSP
Seems possible using Live555/FFMPEG
MooncatVenture Group - Old FFMPEG, not compatible with ARMv7s (No updates/blogs/commits in over a year)
DFURTSPPlayer - This is a working example.
RTMP
Seems possible using Live555/FFMPEG
A few libraries are out there for data messaging, but that is all
MidnightCoders Project - Video support does not seem to be built yet, as audio is not either.
I've never messed with anything video related before, so encoding, frame rates, key frames, chunks, etc. are pretty foreign to me. Right now, it seems building a static binary from Live555/FFMPEG is the only solution to my problem. If so, can anyone give me a simple quickstart guide or links to a blog/example someone has out there? I'm not looking for anything crazy, just a simple:
Download This - LINK
Compile it like this - LINK
Place it into X Folder in Xcode
Create X Object
Read Stream API here - LINK
If not, anyone want to point me to a working open source library?
Oh yeah, this happens to be my first iPhone app and first time in Objective-C. Awesome first project, yeah?
DFURTSPPlayer is a working example on GitHub. I will have to double-check the licensing issues, but this is a good place to start for RTSP.
It seems that, at this time, the only way to do what I want is to create a static binary to use, from scratch. libavcodec, FFMPEG, and Live555 are all under the LGPL, which means that, in order not to make my code open source, I would have to allow dynamic linking so that my app's users can swap in modified versions of the open source libraries I used whenever they want. The App Store does not allow dynamic linking, so I am essentially dead in the water unless I want to write it all from scratch, which I definitely do not want to do...
I would like to receive Server-Sent Events in my native iOS app; however, I am not using WebKit/Safari. From what I've found, NSURLConnection is a poor fit, as it chunks the response. I've also looked at the ZTWebSocket library (obviously nice, but I'm seeking SSE, not WebSockets). Would CocoaAsyncSocket be appropriate, or is it limited to pure TCP socket communication?
I have a sneaking suspicion that I am missing something obvious, or there'd be a library or sample for this already. Thanks in advance.
SSE is an HTTP technology in the sense that it is bound to an open HTTP connection. CocoaAsyncSocket provides raw TCP/UDP sockets and does not know anything about HTTP. So no, CocoaAsyncSocket won't give you SSE, as you suspected.
I don't know of any standalone implementation of SSE (in the spirit of standalone WebSocket implementations), which is maybe what you are searching for. I also don't know whether that would make sense at all, since SSE delivers messages in the form of DOM events, which are most sensible in the context of HTML, as far as I can see.
If all you want to achieve is sending messages to your iOS app and you are free in your choice of technology, raw sockets would do. But WebSockets would more likely suit your needs, depending on what you want. Take a look at SocketRocket.
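If WebSockets do fit, SocketRocket's API is small. Roughly (worth double-checking against the current SRWebSocket header; the URL is a placeholder):

#import "SRWebSocket.h"

// Open the socket and receive messages through the delegate.
SRWebSocket *socket = [[SRWebSocket alloc] initWithURLRequest:
    [NSURLRequest requestWithURL:[NSURL URLWithString:@"ws://example.com/updates"]]];
socket.delegate = self;   // adopt SRWebSocketDelegate
[socket open];

// Delegate callback, called for each incoming message (NSString or NSData):
- (void)webSocket:(SRWebSocket *)webSocket didReceiveMessage:(id)message
{
    NSLog(@"Received: %@", message);
}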
After some more research on this, it's my opinion that the best way to implement Server-Sent Events on iOS without WebKit is to use a customized NSURLConnection/NSURLRequest toolset. I settled on ASIHTTPRequest. This library allows you to explicitly control the persistence attribute on the connection object (essential), handle data as it is received over the stream, store responses (e.g. in local files), etc.
Not to mention it contains lots of other handy extensions/customizations in the realm of networking (an improved Reachability observer, a simplified API for async requests, a queuing feature, even the ability to load entire web pages, CSS, JS and all).
On the server side, I'm using cyclone-sse (tornado) and nginx (as a reverse proxy). Pretty exciting: now I can see my SSEs pushed simultaneously to both my iOS simulator and a browser subscriber. Cyclone even handles all the connections and gives me an API which supports simple POST for message pushes (it also supports AMQP and Redis)...
Long story short, ASIHTTPRequest was a perfect solution for me.
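For reference, the shape of that solution is roughly the following sketch (the URL is a placeholder, and the block/property names should be checked against the ASIHTTPRequest headers you ship):

#import "ASIHTTPRequest.h"

// Hold the connection open and handle SSE bytes as they arrive.
NSURL *url = [NSURL URLWithString:@"http://example.com/events"];
__block ASIHTTPRequest *request = [ASIHTTPRequest requestWithURL:url];
[request setShouldAttemptPersistentConnection:YES];
[request setPersistentConnectionTimeoutSeconds:300];
[request setDataReceivedBlock:^(NSData *data) {
    NSString *chunk = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];
    // SSE frames look like "data: ...\n\n"; accumulate chunks and split on blank lines here.
    NSLog(@"chunk: %@", chunk);
}];
[request setFailedBlock:^{
    NSLog(@"stream dropped: %@", [request error]);
}];
[request startAsynchronous];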
Try this simple library which is written in Swift:
https://github.com/hamin/eventsource.swift
The API is super simple. It uses NSURLConnection for now.
What are the HCI challenges of Web 2.0?
Here are a few more:
Clear privacy options
Facebook has repeatedly changed the way it deals with content ownership and privacy. (See here, here and here.) Aside from the obvious PR gaffes, this has also demonstrated the difficulty users have understanding privacy.
Geeks like us are familiar with ideas of inheritance and groups. Heck, many of us work explicitly with permission structures when dealing with files on *nix systems. To most users though, it's not clear who can see what or why.
Service Interoperability
On the desktop we're used to being able to chain together tools to get the outcome we want. A simple example would be dragging image thumbnails from a file explorer to an image editor. We'd expect that to work, but not on the web.
The Flock browser goes some way to overcome this shortfall, as does the Google Docs web clipboard, but interaction between web services is still a long way off what we expect from the desktop.
Accessibility
Web 1.0 was primarily text based, so the main accessibility issues were easy to fix: stuff like text as images and tables for layout, which both affect screen-readers used by the blind.
As the content of the web gets richer (more images, video and audio), the chances get larger that someone will be excluded from it. Moreover, making video and audio accessible is much harder than making text or images accessible, so it's much less likely to be done.
Lastly, Web 2.0 introduced a whole new problem for accessibility: dynamic content. How should screen-readers (for example) deal with new content appearing on a page after an AJAX query? WAI-ARIA aims to address these issues, but it still requires the web designer to implement it.
Hope this was useful.
There are plenty, as I see it:
Different screen resolutions.
Different hardware capabilities (mobile, touch, desktop, laptop; soon orientation too).
Localized content.
Location based.
With HTML5 upcoming: hardware acceleration, native APIs, localStorage, offline support.