Problem
I'm trying to provide my users with an alternative to purchasing my IAP by allowing them to share that they're playing the game via Facebook.
However, when the composer view controller loads, the content is editable by the user, which, for profitability's sake, is a bad thing: they could remove the entire message and still receive the perk for sharing, ruling out that way of marketing.
I'm curious about two possible solutions.
Solution One
Force the composer content to be read-only?
Solution Two
Cancel the sharing and display an error message if the sent message is not equal to the initial text/images.
Also, if it is not possible for them to remove the image and/or URL, then I don't really have a problem with them adding their own text. However, if they can remove the image/URL, then there is an issue.
Thank you for reading.
Restricting or enforcing what the user shares, in any way, is not allowed by the Facebook Platform Policy; see point 2 of https://developers.facebook.com/policy#control. You can't make the share dialog read-only, and you should not check whether they shared the content you provided.
With the second solution you might also be hitting a policy restriction: you should not incentivize people to share for these kinds of promotions. See rule number 5: https://developers.facebook.com/policy#properuse. This might be a more difficult issue, though, policy-wise.
You can let people share an Open Graph object, either one generated from your app or one that you (or Facebook) hosts, via an Open Graph URL. For that, see https://developers.facebook.com/docs/sharing/opengraph and https://developers.facebook.com/docs/sharing/best-practices.
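For context, the hosted Open Graph URL is just a page whose head carries og: meta tags that Facebook scrapes when the link is shared; a minimal sketch, with placeholder values:

```html
<!-- Placeholder values; Facebook's scraper reads these when the URL is shared -->
<head>
  <meta property="og:title"       content="My Game" />
  <meta property="og:type"        content="website" />
  <meta property="og:url"         content="https://example.com/my-game" />
  <meta property="og:image"       content="https://example.com/my-game/share.png" />
  <meta property="og:description" content="Playing My Game right now." />
</head>
```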
Related
[Disclaimer: I'm not sure if this kind of question is accepted here, as it is about a piece of software that's already deployed. Rest assured I didn't drop any confidential information. Also, do tell me if I violated any SO rules by posting this so I can take it down immediately.]
I have a working Learning Management System web application, and I recently received a bug report about a button not showing. After investigating, I proved that the user was not using the web app as intended. When taking an exam, he was opening multiple tabs to exploit the feature that tells him whether an answer was correct or not. He would then use this information to eliminate the wrong answers and submit all the right answers in another tab/window.
I'm using Rails 4.2. Is there a way to prevent multi-tab browsing? I'm thinking something like: if a user is signed in and attempts to open the web app in a new tab, they should see something like "Please use one tab", with all features/hyperlinks/buttons disabled.
Here's a screenshot of how I proved he was using multiple tabs. Notice that there are multiple logs of the same attempt # because the current implementation allows saving a study session and resuming later (this is the part that's being exploited): opening a new tab looks up the most recent attempt session and continues from there. This is also why most of the sessions don't have a duration value: the user only finishes the study session in one tab (by clicking a button that ends the study session), so the system cannot compute a duration for the others because they have no end timestamp.
This is what a single-tab user looks like:
This is more of an application misuse issue than a bug.
You should add protection not only against multiple tabs but against multiple browsers as well, so it can't be a purely front-end check.
One solution could be using ActionCable to check whether a user already has an active connection and then act accordingly.
Another, for example: generate a GUID in JS and pass it with every answer. If it's different from the previous answer's, it means the user opened a new window (a rough Rails-side sketch of this follows below).
But of course the solution will depend on your current architecture; without knowing how you currently organise client-server communication, it's hard to give an exact and optimal solution.
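A minimal sketch of the GUID idea on the Rails side (the controller, parameter name, and message are made up for illustration): the exam page generates a random token once, sends it with every answer, and the server rejects answers from any tab whose token doesn't match the first one it saw.

```ruby
# app/controllers/answers_controller.rb -- illustrative sketch only
class AnswersController < ApplicationController
  before_action :enforce_single_tab

  def create
    # ...normal answer handling...
  end

  private

  # The exam page generates a GUID in JS and submits it as params[:tab_token]
  # with every answer. Note: the session is per browser, so this only catches
  # extra tabs in the same browser; a server-side store keyed by user would be
  # needed to also catch a second browser.
  def enforce_single_tab
    session[:tab_token] ||= params[:tab_token]          # first tab wins
    return if session[:tab_token] == params[:tab_token]

    render plain: "Please use one tab", status: :forbidden
  end
end
```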
I found an answer here. I just placed this JS in the application view to prevent any extra instance of the website.
Thanks to everyone who pitched in.
I'm developing a little app in which users can create their own content. Most content is taken from a MySQL DB, and images from AWS S3, and brought to the front end through regular jQuery/PHP/HTML. All content a user has created is private and can only be accessed via a login. However, I would like users to be able to send selected parts of their content to other users and people via links they can click on to see that selected content, much like sending someone outside the community a link to a user's image on Facebook.
I have never done this before and have no idea how to even begin. If anyone could point me in the right direction it would be highly appreciated. I have searched around about this but don't really know what to search for.
If this is against SO's rules (as it's a general question about a topic and not specialized enough), please let me know and I'll remove it.
I think the best way to achieve this is to have a private bucket in which each user has distinct permissions on his own folder and is the only one who can access it. This is something you should probably implement at the application level: allow only the app to access the bucket, and deny every other type of access.
For the public content, you should create a different bucket which is publicly available to everyone, and move all public content there.
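To cover the "share a link with someone outside" part on top of that private bucket, one common pattern (my addition, not something spelled out above; shown in Ruby with the aws-sdk-s3 gem, though the question's stack is PHP) is to have the app validate a share token it issued and then redirect to a short-lived pre-signed URL, so the bucket itself never becomes public:

```ruby
# Illustrative sketch: the app stays the gatekeeper for the private bucket.
require "aws-sdk-s3"

class SharedContentController < ApplicationController
  # GET /shared/:token  -- the link a user sends to a friend
  def show
    share  = Share.find_by!(token: params[:token])  # hypothetical model of issued shares
    object = Aws::S3::Resource.new
                              .bucket("my-private-bucket")   # placeholder bucket name
                              .object(share.s3_key)

    # Pre-signed URL valid for 10 minutes; no public bucket access required.
    redirect_to object.presigned_url(:get, expires_in: 600)
  end
end
```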
These are my 2 cents; there might be better practices in the field, but IMO it's a nice solution which is not too complicated.
I currently work at a school and have an idea to create an app that allows students to contact a grown-up (for example, the principal) anonymously. The app would quite simply consist of a contact form. I am trying to find out the best and easiest way to achieve this without setting up servers with a separate API. Does anyone have a suggestion on how to achieve it? Is there any way to set up an e-mail form with a preset recipient and a built-in sender account? Please guide me in the right direction.
You would need to implement an SMTP client. You can use open-source code like skpsmtpmessage.
It's likely that their example app could be your solution.
Your biggest problem will be deployment. You definitely need to pay for a $99/year developer account and add all the students' device IDs to your account (with a maximum of 100 devices per year), or register all of them as beta testers (I don't know the limitations).
This probably isn't doable so easily, as it seems you don't have iOS development experience so far. Maybe you can find something on the App Store that works with self-hosted databases, but you definitely need to host some kind of web app/API.
You may want to give Appygram a try to handle the back-end if you are able to set up the contact form itself. While it's a separate hosted API, at least you don't have to build/manage it.
Appygram is a free web service that would allow you to configure all the details such as which adults could be contacted, their point(s) of contact (i.e. email address), and it would process and send all the submissions for you. All your app needs to do is send a form post request.
A nice thing about having this information outside of the iOS app itself is that you can change the contact details on the fly without requiring an update to the iOS app. Whether you use Appygram (which, since I contribute to it, I am slightly biased toward!) or something similar, since this is for students I would recommend a solution that lets you update your configuration without requiring app updates.
Finally, I'd second what Julian said: the challenge here could be deployment. One possible alternative would be to make this a mobile-friendly web page accessible only via student login or on the school network (or both). It would probably be easier development-wise and wouldn't require installs or the hurdles Julian described with device registration, etc. And Appygram would still work with this setup as well.
Good luck!
I need to develop an application which should help me get all the statuses/messages from different services like Twitter, Facebook, etc. into my application, and when I post a message it should get posted to all of those services. I am using Authlogic for authentication. Can anyone suggest what gems/plug-ins I can use?
I need help with the APIs to get all the tweets/messages displayed in my application, and also with ways to post messages to the corresponding services from my application. Can anyone help me from a design point of view?
Walk through what you'd want to do in your head. Imagine the working site; imagine your web app working before you start. So your user logs in (handled by Authlogic) and sees a textbox labelled "What are you doing right now?". The user fills in a status message and clicks "post". The status message appears at the top of their previously posted messages.
Start with the easy part. Create a class that posts to two services. Use the twitter gem and rfacebook to post to the two already-defined services (a rough sketch follows below). In the future, you'll want to let the user associate services with their account, and you would iterate through the associated services and post the message to each. Once you have this working, you can refactor or polish the UI a bit to round out this feature. I personally would do the "add a social media account to my profile" feature towards the end.
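Here's roughly what that class could look like (a sketch, assuming a recent version of the twitter gem; the Facebook side is stubbed because rfacebook's API is dated, and the class names are mine):

```ruby
# Illustrative sketch: fan a status update out to several services.
require "twitter"

class TwitterService
  def initialize
    @client = Twitter::REST::Client.new do |config|
      config.consumer_key        = ENV["TWITTER_CONSUMER_KEY"]
      config.consumer_secret     = ENV["TWITTER_CONSUMER_SECRET"]
      config.access_token        = ENV["TWITTER_ACCESS_TOKEN"]
      config.access_token_secret = ENV["TWITTER_ACCESS_TOKEN_SECRET"]
    end
  end

  def post(message)
    @client.update(message)   # the twitter gem's "post a status" call
  end
end

class FacebookService
  def post(message)
    # Stub: wire up rfacebook (or a newer Graph API client) here.
  end
end

class StatusBroadcaster
  def initialize(services)
    @services = services       # later: build this from the user's linked accounts
  end

  def broadcast(message)
    @services.each { |service| service.post(message) }
  end
end

# Usage:
# StatusBroadcaster.new([TwitterService.new, FacebookService.new]).broadcast("Hello!")
```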
Harder is the reading of the data (strangely enough), because you're going to have to figure out how to store it. You could store nothing, but I suspect you'd run into API limits just searching all the time (you could design around this). I would keep a little cache of posts associated with the user's social media account. In this way, the data model would look like this (a rough Rails sketch follows the list):
A user has many social media accounts.
A social media account has many posts. (cache)
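In Rails terms, that model might look roughly like this (model and column names are illustrative):

```ruby
# Illustrative sketch of the cached data model.
class User < ActiveRecord::Base
  has_many :social_media_accounts
  has_many :posts, through: :social_media_accounts
end

class SocialMediaAccount < ActiveRecord::Base
  belongs_to :user
  has_many :posts             # the local cache of fetched content
end

class Post < ActiveRecord::Base
  belongs_to :social_media_account
  # columns might include: body, remote_id, posted_at, source ("twitter", "facebook", ...)
end
```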
Of course, now you need to schedule the caching of the posts. This could be done manually, based on an event (like when they log in), or time-based. When the update happens, you load up the posts for that social media account, and the user will see the posts the next time they hit the page. For real-time push to the client's browser while they stare at the screen, use faye (non-trivial) and Ajax to pull the new posts to the top of the social media stream view.
The time-based approach is tricky because you'd either have to run a cron job or have Rails handle it all with a gem like clockwork (sketched below), but then you have to leave Rails running. I've also solved this by having a class in /lib do all the work, with a simple web call kicking off the update, but that wasn't in a multi-user use case, so it might not work here. In any case, you'll want some nice reusable code for these problems, since update requests can come from many different sources.
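For the time-based route, a clock process with the clockwork gem stays small; a sketch, with the updater class name being a placeholder:

```ruby
# clock.rb -- run with `clockwork clock.rb`; illustrative sketch only.
require "clockwork"
require_relative "config/boot"
require_relative "config/environment"   # load Rails so models are available

module Clockwork
  every(15.minutes, "social_posts.refresh") do
    SocialMediaAccount.find_each do |account|
      PostCacheUpdater.new(account).call   # hypothetical class in /lib doing the fetch
    end
  end
end
```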
You'll also have to deal with the API limits. When pulling down content from Twitter, you won't get everything; that will just have to be known by the user, or you'll have to indicate a "break in time" somehow.
The UI should be pretty easy (functionally anyway), because you know which source the post/content is coming from. It'd be easy to throw a little icon next to the post to display which social media site it's coming from.
Anyway, good luck, sounds like a fun project.
I don't know much about SEO or how web spiders work, so forgive my ignorance here. I'm creating a site (using ASP.NET MVC) which has areas that display information retrieved from the database. The data is unique to the user, so there's no real server-side output caching going on. However, since the data can contain things the user may not wish to have exposed in search engine results, I'd like to prevent any spiders from accessing the search results page. Are there any special actions I should take to ensure that the search results directory isn't crawled? Also, would a spider even crawl a page that's dynamically generated, and would any actions preventing certain directories from being crawled mess up my search engine rankings?
Edit: I should add that I'm reading up on the robots.txt protocol, but it relies on cooperation from the web crawler. However, I'd also like to stop any data-mining users who would ignore the robots.txt file.
I appreciate any help!
You can prevent some malicious clients from hitting your server too heavily by implementing throttling on the server. "Sorry, your IP has made too many requests to this server in the past few minutes. Please try again later." In practice, though, assume that you can't stop a truly malicious user from bypassing any throttling mechanisms that you put in place.
Given that, here's the more important question:
Are you comfortable with the information that you're making available for all the world to see? Are your users comfortable with this?
If the answer to those questions is no, then you should be ensuring that only authorized users are able to see the sensitive information. If the information isn't particularly sensitive but you don't want clients crawling it, throttling is probably a good alternative. Is it even likely that you're going to be crawled anyway? If not, robots.txt should be just fine.
It seems like you have two issues.
The first is a concern about certain data appearing in search results; the second is about malicious or unscrupulous users harvesting user-related data.
The first issue will be covered by appropriate use of a robots.txt file, as all the big search engines honour it.
The second issue seems more to do with data privacy. The first question which immediately springs to mind is: If there is user information which people may not want displayed, why are you making it available at all?
What is the privacy policy for such data?
Do users have the ability to control what information is made available?
If the information is potentially sensitive but important to the system could it be restricted so it is only available to logged in users?
Check out the Robots exclusion standard. It's a text file that you put on your site that tells a bot what it can and can't index. You will also want to address what happens if a bot doesn't honour the robots.txt file.
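For the cooperative crawlers, a robots.txt along these lines keeps them out of the results pages (the /search/ path is a placeholder for your actual route):

```
# robots.txt, served from the site root
User-agent: *
Disallow: /search/
```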
Use a robots.txt file, as mentioned. If that is not enough, then you can:
Block unknown user agents - hard to maintain, and easy for a bot to spoof a browser's user agent (although most legitimate bots won't)
Block unknown IP addresses - not useful for a public site
Require logins
Throttle user connections - tricky to tune, and you will still be disclosing information.
Perhaps use a combination. Either way it's a trade-off: if the public can browse to it, so can a bot. Be sure you don't block and alienate people in your attempts to block bots.
A few options:
force the user to login to view the content
add a CAPTCHA page before the content
embed content in Flash
load dynamically with JavaScript