I am completely dependent on RestKit for my app's network calls. For each API call I want to log:
1) Time taken by each API to get a response
2) Size of Request/Response Payload
3) URL of the API
Is there any way I can enable such logging in RestKit? My app calls around 50-60 APIs and I don't want to dig into the entire code base and add manual logs. I also don't want to use a network profiling tool, since I will be tracking this data while an actual user is using the application.
I also can't use any third-party paid tool, so I want to log these values in the application database.
RestKit does have logging you can enable, but that isn't what you want if you plan to actually release this. It also writes to the console log, rather than producing values you can process and save.
Your likely best option is to subclass the RKObjectManager and intercept the requests that are being placed and the NSURLRequests which are being generated.
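To illustrate the kind of record you would capture and store, here is a minimal sketch in Swift. None of it is RestKit API: APICallRecord and makeRecord are hypothetical names, and you would call something like makeRecord from the request/response handling of your RKObjectManager subclass, then write the record to your application database.

import Foundation

// Hypothetical record of a single API call; persist instances to your app database.
struct APICallRecord {
    let url: String
    let duration: TimeInterval     // seconds from request start to response
    let requestBytes: Int          // size of the request payload
    let responseBytes: Int         // size of the response payload
}

// Hypothetical helper; call it from the completion handling of your RKObjectManager
// subclass, passing the start time you noted when the request was placed.
func makeRecord(request: URLRequest,
                responseData: Data?,
                startedAt: Date) -> APICallRecord {
    return APICallRecord(
        url: request.url?.absoluteString ?? "",
        duration: Date().timeIntervalSince(startedAt),
        requestBytes: request.httpBody?.count ?? 0,
        responseBytes: responseData?.count ?? 0
    )
}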
I am currently creating an iOS application with Swift. For the database I use Firebase Realtime Database where I store among other things information about the user and requests that the user sends me.
It is very important for my application that the data in the database is not corrupted.
For this I have disabled data persistence so that requests are not stored locally on the device. But I was wondering whether it is possible for the user to directly modify the values of variables while my application is running and still send erroneous requests.
For example, if the user has a number of coins, can he access the application's memory, modify the number of coins, return to the application, and send an erroneous request without having to craft it himself?
If this is the case, is it really more secure to disable data persistence, or is this a misconception?
Also, does blocking jailbroken devices solve my problem? I've heard that even a normal user can still modify the locally saved requests before they are sent.
To summarize: is my understanding correct? Is it really useful to prevent requests from being saved locally, or will a malicious user be able to modify the values of variables directly at runtime anyway, even without a jailbreak?
I would also like to find a solution so that the data in my database is reliable.
Thank you for your attention :)
PS: I have also set the database security rules so that a logged-in user can read and write only in his own area.
You should treat the server-side data as the only source of truth, and consider all data coming from the client to be suspect.
To protect your server-side data, you should implement Firebase's server-side security rules. With these you can validate data structures and ensure all read/writes are authorized.
Disabling client-side persistence, or write queues as in your previous question, is not all that useful and not necessary once you follow the two rules above.
As an added layer of security you can enable Firebase's new App Check, which works with a so-called attestation provider on your device (DeviceCheck on iOS) to detect tampering, and allows you to then only allow requests from uncorrupted devices.
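Enabling App Check on iOS is a small amount of client code. A minimal sketch using the FirebaseAppCheck SDK's DeviceCheck provider, registered before FirebaseApp.configure() in the AppDelegate; exact class names may differ slightly between SDK versions:

import UIKit
import FirebaseCore
import FirebaseAppCheck

func application(_ application: UIApplication,
                 didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
    // Register the attestation provider before configuring Firebase.
    AppCheck.setAppCheckProviderFactory(DeviceCheckProviderFactory())
    FirebaseApp.configure()
    return true
}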
By combining App Check and Security Rules you get both broad protection from abuse, and fine-grained control over the data structure and who can access what data.
I am wondering what the best way is to keep users in sync with each other in a social network. The stack concerned is an iOS app with a NodeJS backend. Let me give you an example:
Say X and Y are friends on a social network. Y's posts appear in X's feed, and as such, Y is cached somewhere on X's phone. However, this morning Y decided to change their profile picture. Everything is well: the new picture is uploaded to the server, but how do we go about letting X know about the change of profile picture?
My possible solution: Create a route /<UID>/updates that contains a stack of "cookies" which lets the user know what and who changed since the last time they made a GET request to the route.
This seems elegant enough, but what worries me is what happens on the client side (am I supposed to make a GET request every 2 minutes during my app's uptime?). Are there any other solutions?
One solution is indeed to poll the server, but that's not very elegant. A better way is to make use of websockets:
WebSockets is an advanced technology that makes it possible to open an interactive communication session between the user's browser and a server. With this API, you can send messages to a server and receive event-driven responses without having to poll the server for a reply.
They are a 2-way connection between client and server, allowing the server to notify the client of any changes. This is the underlying technology used in the Meteor framework for example.
Take a look at this blogpost for an example of how to use websockets between an iOS client and a NodeJS backend. They make use of the open source SocketRocket iOS library.
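If you would rather avoid a third-party dependency, Foundation's URLSessionWebSocketTask (iOS 13+) covers the same ground as SocketRocket. A minimal client-side sketch; the URL and the message payload shown are assumptions:

import Foundation

// Open a socket to the (assumed) NodeJS backend.
let task = URLSession.shared.webSocketTask(with: URL(string: "wss://example.com/updates")!)
task.resume()

// Keep receiving server pushes, e.g. {"user":"Y","changed":"profilePicture"}.
func listen() {
    task.receive { result in
        switch result {
        case .success(let message):
            if case .string(let text) = message {
                print("Server pushed update: \(text)")  // refresh the cached profile here
            }
            listen()
        case .failure(let error):
            print("Socket closed: \(error)")
        }
    }
}
listen()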
Is there a way to mock requests when writing automated UI tests in Swift 2.0? As far as I am aware, UI tests should be independent of other functionality. Is there a way to mock the responses from server requests in order to test the behaviour of the UI depending on the response? For example, if the server is down, the UI tests should still run. A quick example: for login, mock a failed password so the UI shows an alert; if the login is successful, the next page should be shown.
In its current implementation, this is not directly possible with UI Testing. The only interface the framework has directly to the code is through its launch arguments/environment.
You can have the app look for a specific key or value in this context and switch up some functionality. For example, if the MOCK_REQUESTS key is set, inject a MockableHTTPClient instead of the real HTTPClient in your networking layer. I wrote about setting the parameters and NSHipster has an article on how to read them.
While not ideal, it is technically possible to accomplish what you are looking for with some legwork.
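A small sketch of the launch-environment approach described above. The MOCK_REQUESTS key is the one from this answer; MockableHTTPClient remains a hypothetical type you would provide yourself:

import XCTest
import Foundation

// UI test target: launch the app with the flag set.
final class LoginUITests: XCTestCase {
    func testFailedLoginShowsAlert() {
        let app = XCUIApplication()
        app.launchEnvironment["MOCK_REQUESTS"] = "1"
        app.launch()
        // ... drive the login form and assert that the failure alert appears ...
    }
}

// App target: check the flag at startup and swap in the mock networking layer
// (e.g. a MockableHTTPClient) when it is present.
func shouldMockRequests() -> Bool {
    return ProcessInfo.processInfo.environment["MOCK_REQUESTS"] != nil
}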
Here's a tutorial on stubbing network data for UI Testing I put together. It walks you through all of the steps you need to get this up and running.
If you are worried about the idea of mocks making it into a production environment for any reason, you can consider using a 3rd party solution like Charles Proxy.
Using the map local tool you can route calls from a specific endpoint to a local file on your machine. You can paste plain text into your local file containing the response you want it to return. Per your example:
Your login hits the endpoint yoursite.com/login.
In Charles, using the map local tool, you can route the calls hitting that endpoint to a file saved on your computer, i.e. mappedlocal.txt.
mappedlocal.txt contains the following text
HTTP/1.1 404 Failed
When Charles is running and you hit this endpoint your response will come back with a 404 error.
You can also use another option in Charles called "map remote" and build an entire mock server which can handle calls and responses as you wish. This may not be exactly what you are looking for, but it's an option that may help others, and it's one I use myself.
It seems that the Firebase iOS implementation doesn't support offline caching of the client model. What this means in practice is that:
For Firebase apps requiring authentication, you need to first authenticate and wait for Firebase to finish the login (check the user identity, open a socket, etc.) before you can start moving data. This takes 1-8 seconds (usually 2-5) depending on network conditions, at least here in Finland.
After authenticating, Firebase first downloads the initial set of data and initializes the client cache. The time to perform this depends on the size of the data you add listeners for, but it's usually quite fast.
The problem here is that if you're using Firebase to implement, for example a messaging app, you'd most likely want to show the user a previously cached version of the message threads and messages, before the actual connection with the backend server is established.
I'd assume the correct implementation for this would need to handle:
1) The client-side model <-> Firebase JSON mapping (I use Mantle for this)
2) Persisting the client-side model to disk (manual implementation using NSKeyedArchiver, or Core Data or such?)
3) Synchronizing the on-disk model with the Firebase-linked model in memory, when the connection is available (manual implementation?)
Has anyone come up with a solution (own or 3rd party) to achieve 2) and 3)?
It seems Firebase has solved this problem since this question was asked. There are a lot of resources on Offline Capabilities now with Firebase, including disk persistence.
For me, turning on persistence was as simple as the following in my AppDelegate:
Firebase.defaultConfig().persistenceEnabled = true
Assuming your app has been run with an internet connection at least once, this should work well in loading the latest local copy of your data.
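For reference, with the current Firebase SDK the same switch has a slightly different name, and you can additionally pin specific locations so they stay fresh offline. A minimal sketch; the "message-threads" path is just an example:

import FirebaseCore
import FirebaseDatabase

// Set persistence before any other Database calls, e.g. right after FirebaseApp.configure().
FirebaseApp.configure()
Database.database().isPersistenceEnabled = true

// Optionally keep a location synced so cached data is available immediately on the next launch.
Database.database().reference(withPath: "message-threads").keepSynced(true)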
There is a beta version of this technology within the iOS client, described here: https://groups.google.com/forum/#!topic/firebase-talk/0evB8s5ELmw. Give it a go and let the group know how it goes.
Just one line is required for persistence with Firebase in iOS:
FIRDatabase.database().persistenceEnabled = true
Can be found here in Firebase Docs
I am going to build a web application which will accept different events from external sources and quickly present them to the user for further action. I want to use Ruby on Rails for the web application. This is an internal development project. I would prefer simple and easy-to-use solutions for rapid development over highly reliable but complex systems.
What it should do
The user has the web application open in his browser. Now a phone call comes in. The phone call is registered by a PBX monitoring daemon, in this case via the Asterisk Manager Interface. The daemon sends the available information (remote extension, local extension, call direction, channel status, start time, end time) somehow to the web application. Next, the user is notified about the phone call event. The user can now work with this, for example by entering a summary or by matching the call to a customer profile.
The time from the first event on the PBX (e.g. the creation of a new channel) to the popup notification in the browser should be short. Given a fast network, I would like it to be within two seconds. The individual pieces of information about an event are created asynchronously; the local extension may be supplied separately from the remote extension. The user can enter a summary before the call has ended, and the end time, new status, etc. will show up on the interface as soon as one party has hung up.
The PBX monitor is just one data source; there will be more monitors, like email or requests via a web form. The monitoring daemons will not necessarily run on the same host as the database or web server. I do not imagine the application will serve thousands of logged-in users or concurrent requests any time soon, but by design 200 users with about the same number of events per minute should not be a scalability issue.
How should I do it?
I am interested to know how you would design such an application. What technologies would you suggest? How do the daemons communicate their information? When and by whom is the data about an event stored into the main database? How does the user get notified? Should the browser receive a complete dataset on behalf of a daemon or just a short note that new data is available? Which JS library to use and how to create the necessary code on the server side?
On my research I came across a lot of possibilities: Message brokers, queue services, some rails background task solutions, HTTP Push services, XMPP and so on. Some products I am going to look into: ActiveMQ, Starling and Workling, Juggernaut and Bosh.
Maybe I am aiming too high? If there is a simpler or easier way, like just using the XML or JSON interface of Rails, I would like to hear about that even more.
I hope the text is not too long :)
Thanks.
If you want to skip Java and Flash, perhaps it makes sense to use a technology in the Comet family to do the push from the server to the browser?
http://en.wikipedia.org/wiki/Comet_%28programming%29
For the sake of simplicity, for notifications from daemons to the Web browser, I'd leave Rails in the middle, create a RESTful interface to that Rails application, and have all of the daemons report to it. Then in your daemons you can do something as simple as use curl or libcurl to post the notifications. The Rails app would then be responsible for collecting the incoming notifications from the various sources and reporting them to the browser, either via JavaScript using a Comet solution or via some kind of fatter client implemented using Flash or Java.
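For illustration only, here is the shape that notification POST could take. The endpoint path and JSON fields are assumptions, the daemon can be written in any language, and plain curl works just as well; the sketch below happens to use Swift's URLSession:

import Foundation

// Illustrative only: a daemon reporting a new PBX event to the Rails app's REST endpoint.
// URL and field names are assumptions; curl from a shell script does the same job.
var request = URLRequest(url: URL(string: "https://rails-app.internal/events")!)
request.httpMethod = "POST"
request.setValue("application/json", forHTTPHeaderField: "Content-Type")
request.httpBody = try? JSONSerialization.data(withJSONObject: [
    "source": "asterisk",
    "remote_extension": "1234",
    "local_extension": "42",
    "direction": "inbound",
    "channel_status": "ringing"
])
URLSession.shared.dataTask(with: request) { _, _, error in
    if let error = error { print("Failed to deliver event: \(error)") }
}.resume()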
You could approach this a number of ways but my only comment would be: Push, don't pull. For low latency it's not only quicker it's more efficient, as your server now doesn't have to handle n*clients once a second polling the db/queue. ActiveMQ is OK, but Starling will probably serve you better if you're not looking for insane levels of persistence.
You'll almost certainly end up using Flash on the client side (Juggernaut uses it last time I checked) or Java. This may be an issue for your clients (if they don't have Flash/Java installed) but for most people it's not an issue; still, a fallback mechanism onto a pull notification system might be prudent to implement.
Perhaps http://goldfishserver.com might be of some use to you. It provides a simple API to allow push notifications to your web pages. In short, when your data updates, send it (some payload data) to the Goldfish servers and your client browsers will be notified, with the same data.
Disclaimer: I am a developer working on goldfish.
The problem
There is an event, either external or perhaps internal to your app.
Users should be notified.
One solution
I am myself facing this problem. I haven't solved it yet, but this is how I intend to do it. It may help you too:
(A) The app must learn about the event (via an exposed end point)
Expose an end point by which your app can be notified about external events.
When the end point is hit (and the request authenticated), users then need to be notified.
(B) Notification
You can notify the user directly by changing the DOM on the current web page they are on.
You can notify users by using the Push API (but you need to make sure your target browsers support it).
All of these notification features should be able to be handled via Action Cable: (i) either by updating the DOM to notify you when a phone call comes in, or (ii) via a push notification that pops up in your browser.
Summary: use Action Cable.
(Also: why use an external service like Pusher when you have Action Cable at your disposal? Some people say scalability and infrastructure management, but I do not know enough to comment on these issues.)