Is it possible to make a posting to Craigslist through my own website?

What I am trying to do is allow users to make postings to Craigslist through my own website using PHP cURL. This is NOT an automated posting system; I just want users to be able to post to Craigslist and to my website at the same time. So far I've managed to log in using PHP, but I'm still not sure how to post the title, description, contact information, etc. I am not familiar with cURL.

Your question is kind of broad, so I'll answer broadly. Narrow down your question (or post a follow-up) so we can help you better.
Is it possible to make a posting to Craigslist through my own website?
It depends. There are two major ways, but most websites block both of them, so I suspect Craigslist does too.
1. Clientside
Your visitors become visitors of Craigslist.
You take the form that you find on Craigslist and host it (the HTML code) on your site, but with the form 'action' pointed at their server.
They'll probably block this, based on the Referer header, a session key or something similar.
2. Serverside
Your server acts as a client for Craigslist.
You host the form on your server, and the processing page as well. After you've captured all the input, your server acts as a client to Craigslist, using, for example, PHP cURL.
You should first try whether option 1 works; if not, start coding option 2. If you get stuck on a specific part, post a question and we'll help you further.
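A minimal sketch of option 2, assuming you already have a working cURL login. The posting URL and field names below are placeholders; you would have to take the real ones from Craigslist's own posting form, and Craigslist may still reject automated submissions:

<?php
// Sketch only: re-submit the data your user entered on YOUR form to Craigslist.
// The endpoint and field names are placeholders, not Craigslist's real ones.
$postFields = array(
    'PostingTitle' => $_POST['title'],        // hypothetical field name
    'PostingBody'  => $_POST['description'],  // hypothetical field name
    'FromEMail'    => $_POST['email'],        // hypothetical field name
);

$ch = curl_init('https://post.craigslist.org/PLACEHOLDER'); // placeholder URL
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($postFields));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// Reuse the cookie jar from your earlier login call so the session is kept.
curl_setopt($ch, CURLOPT_COOKIEFILE, '/tmp/cl_cookies.txt');
curl_setopt($ch, CURLOPT_COOKIEJAR, '/tmp/cl_cookies.txt');

$response = curl_exec($ch);
if ($response === false) {
    error_log('Posting failed: ' . curl_error($ch));
}
curl_close($ch);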

There is an API available now to make automated posts (one or more) in one request.
http://www.craigslist.org/about/bulk_posting_interface
There are two caveats in your case:
It uses RSS as the request/response format (a rough transport sketch follows below).
Your users will need to provide their Craigslist username/password (assuming they have an account).
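Purely to show the shape of such a call, here is a very rough PHP cURL transport sketch. The endpoint, content type and RSS payload are placeholders and assumptions; the real URL and schema come from the bulk posting documentation linked above:

<?php
// Sketch only: POST an RSS document to the bulk posting interface.
// $endpoint is a placeholder; the real URL and the RSS schema (including how
// the user's credentials are embedded) are defined in Craigslist's docs.
$endpoint   = 'https://example.invalid/bulk-posting';  // placeholder
$rssPayload = file_get_contents('posting.rss');        // RSS built per the spec

$ch = curl_init($endpoint);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $rssPayload);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: text/xml')); // assumed content type
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

$responseRss = curl_exec($ch); // the response is RSS as well, reporting per-item status
curl_close($ch);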

Related

Reverse engineering a website: cannot find the form inputs in the POST request

I need to interact with an external website I don't own. This external website requires credentials, which I have. My goal is to add a user, but the external website does not offer an external API. It looks like they are using Vaadin.
So to add a new user I need to fill in a form manually. Yet I have been searching for the route the "form" takes to post the input I give, and could not find any.
Here is my issue: when I look at the HTML source code in the browser, I cannot see any form tag. The buttons all have the same id, "button". When I fill in the form and look at the network tab in the developer tools, the POST request does appear, but in the "parameters" section I cannot see the inputs I just gave. The cookies tab does not show the inputs either.
Consequently, my questions are: why can't I find the inputs in the POST request, and where can they be?
Please note: this external website is a medical site, so I prefer not to share the URL, and they don't offer a mobile app, so there is no mobile API I could reverse engineer.
Any help appreciated :-)
Not stating the Vaadin version makes it a tad harder to give an exact answer, but at the core both Vaadin 8 and 10+ behave the same way.
The short answer to your question is: without another entry point, like an API, this cannot be done with just some POST request.
Vaadin is not a plain HTML-form/request/response framework; it holds the scene graph on the server side, in a session. All communication goes through a single endpoint to the server, and only state changes are communicated back to the client.
For what you are after, your best bet is to use test automation frameworks like Selenium, Geb, Cypress, etc.
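If PHP is your environment, a minimal Selenium sketch with the php-webdriver bindings might look like the following. It assumes a Selenium server is running, and every URL and selector below is hypothetical, since the real ones depend on the site's markup:

<?php
// Sketch: drive the Vaadin UI like a real user instead of crafting raw POSTs.
// Assumes the php-webdriver Composer package and a Selenium server on port 4444;
// the URL and all selectors below are hypothetical.
use Facebook\WebDriver\Remote\RemoteWebDriver;
use Facebook\WebDriver\Remote\DesiredCapabilities;
use Facebook\WebDriver\WebDriverBy;

require 'vendor/autoload.php';

$driver = RemoteWebDriver::create('http://localhost:4444/wd/hub', DesiredCapabilities::chrome());

$driver->get('https://the-medical-site.example/login');                  // hypothetical URL
$driver->findElement(WebDriverBy::id('username'))->sendKeys('my-user');  // hypothetical id
$driver->findElement(WebDriverBy::id('password'))->sendKeys('my-pass');  // hypothetical id
$driver->findElement(WebDriverBy::cssSelector('div.login-button'))->click();

// Fill the "add user" form exactly as a human would, then submit it.
$driver->findElement(WebDriverBy::id('new-user-email'))->sendKeys('new.user@example.org');
$driver->findElement(WebDriverBy::cssSelector('div.save-button'))->click();

$driver->quit();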

Craigslist or Kijiji - Is it possible to extract a poster's email address?

Hi and thanks in advance for helping me with my question.
Is it possible to write a script that would extract the following information when provided with a Craigslist or Kijiji post, e.g. http://toronto.en.craigslist.ca/tor/atq/3346994296.html:
email address (default one provided by craigslist)
items in the post
address of poster
Items 1-3 above can be obtained manually, but I would like to just input a posting or ad ID and be able to extract this info automatically.
The short answer to this question is...
Yes, automatically extracting the info listed from web pages similar to the one provided as an example can be done by a relatively simple script.
In general, this activity [of automatically extracting info from web pages] is known as Web Scraping, a particular form of Data Scraping.
There are both off-the-shelf products that can be used (with little or no programming involved; the desired pages and the desired fields within the pages are specified by way of configuration), as well as software libraries that can be used with scripting languages such as Python or Java, which facilitate parsing the HTML pages and, more generally, provide support for the various tasks associated with this activity.
Aside from technical considerations, you need to assess the etiquette and legality of performing this kind of scraping. While some data and sites may be explicitly copyright-protected, it is always a good idea to perform big scraping jobs at low-traffic hours and to throttle the requests so as not to burden the host site unduly. Also, many sites provide an API or data dumps that supply the same info in a simpler and more controlled fashion.
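For the script route, here is a minimal PHP sketch of the idea. The XPath queries are placeholders and have to be adapted to the listing page's actual markup, which can change at any time:

<?php
// Sketch: fetch one listing and pull a couple of fields out of the HTML.
// The XPath expressions are placeholders, not the page's real structure.
$url  = 'http://toronto.en.craigslist.ca/tor/atq/3346994296.html';
$html = file_get_contents($url);

$doc = new DOMDocument();
libxml_use_internal_errors(true);   // real-world HTML is rarely well-formed XML
$doc->loadHTML($html);
libxml_clear_errors();

$xpath = new DOMXPath($doc);

$title = $xpath->query('//h2')->item(0);                                // placeholder query
$reply = $xpath->query('//a[starts-with(@href, "mailto:")]')->item(0);  // placeholder query

echo 'Title:    ' . ($title ? trim($title->textContent) : 'n/a') . PHP_EOL;
echo 'Reply-to: ' . ($reply ? trim($reply->textContent) : 'n/a') . PHP_EOL;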

What's the best service to use for filtering out spam/abuse/malware links for a link shortening webapp?

I have two services, Lincr and LinkBunch. Lincr is a plain-Jane URL shortening service, while LinkBunch lets you shorten multiple links into one link. I've had too much spam posted to the services, so I had to shut down Lincr. Now even LinkBunch seems to be facing the same problem, and it has been disabled by my web host for that reason.
I can't keep shutting down sites like this because of bad links being posted, so I need a malware-filtering API that I can use to filter out the links as and when they are posted.
There are services that let me download an entire bunch of bad links to check against, but instead, I'd prefer doing a live API call on a per-link basis. What can I use for that?
Finally, what's the best malware filtering service out there?
Lincr is down. On LinkBunch, where is your Captcha?
On either site, do you limit the number of posts per IP? Do you use a delay in your response? What about using hidden fields to reduce spam (http://www.reviewmylife.co.uk/blog/2008/05/30/hidden-field-spam-trap-for-phpformmail/)? A sketch of that trick follows below.
I know I'm dodging the question a bit, but you should at least take basic anti-spam measures before resorting to API calls. Even APIs will still fail for newly hacked or newly spammed sites.
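As an example of the hidden-field trap, here is a minimal sketch (the field and file names are arbitrary): a bogus input that humans never see, which most bots will happily fill in.

<!-- form.html: the honeypot field is hidden from humans but visible to bots. -->
<form method="post" action="shorten.php">
    <input type="text" name="url">
    <input type="text" name="website" style="display:none" autocomplete="off">
    <input type="submit" value="Shorten">
</form>

<?php
// shorten.php: reject any submission where the honeypot field was filled in.
if (!empty($_POST['website'])) {
    header('HTTP/1.1 400 Bad Request');
    exit('Rejected.');
}
// ...continue with the normal URL-shortening logic...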

Some questions about DotNetOpenAuth

I have a couple of outstanding questions, mainly regarding Twitter and Facebook.
1. In the FacebookGraph class there are properties such as Id, Name, etc. How do I add to this list? For example, what happens if I want a user's hometown? I tried to add a property called hometown, but it is always null.
2. What should I store for looking users up later in my DB, their id (1418) or the whole URL (http://www.facebook.com/profile.php?id=1418), to grab their data and to see if they have an account with my site?
3. Is it actually a good idea to use this id, as it seems to be common knowledge? Can't someone just find the profile id and make a fake request to my site?
4. How do you set up DotNetOpenAuth to deal with the case where a user goes to Facebook and removes access for my website? I know a deauthorization code can be sent to your site and you can then delete their account, but I don't know how to do that through DotNetOpenAuth.
Twitter
Is it possible to do number 4 with Twitter?
Ajax
Is it possible to make the OpenID stuff work over AJAX? I don't see a sample anywhere in the DotNetOpenAuth samples.
I'm no pro at Facebook. But the FacebookGraph class is in the ApplicationBlock, which ships as source and is fully intended for you to customize or extend within your own application. Hopefully people more familiar with Facebook in particular, or the Facebook docs, can help you with those questions.
Since Facebook is not OpenID, what you store, whether the ID # or the whole URL, is less critical. People should not be able to just craft requests to log in as others, because your site should be verifying signatures, etc. If you're using DotNetOpenAuth appropriately, this is probably being done for you automatically, but without seeing your code it can't be said for sure.
Assume the id is common knowledge. It certainly isn't a long random number so anyone can guess it. The ID must be accompanied by a signature that verifies that Facebook sent the ID, just now, for you.
I suspect the deauthorization code isn't going to be relevant to DotNetOpenAuth -- that's probably just some URL that you respond to. But again, I haven't read the FB docs on this.
Here's the real answer I can give you: yes, OpenID works with AJAX reasonably well. You can see samples of this at nerddinner.com or in a sample blog-post comment system. The most complete AJAX demonstration for a standard login may be in the Web Forms or MVC project templates available in the Visual Studio Gallery.

How would I find all the short urls that link to a particular long url?

Basically I want to know how many people have tweeted a link to a URL, but since there are dozens of link shorteners out there, I don't see any way to do this without having access to all of their URL maps. I found a previous question here, but it was over a year old and didn't have any new answers.
So #1, does anyone know of a service/API for doing this?
And #2, can anyone think of a way to accomplish this task other than submitting the long url in question to all the popular link shortening sites?
ps- I'm also open to comments about why this is impossible or impractical.
You could perform a Google search (or the equivalent via API) for any pages that link to your page. This is done with the link: keyword. So if you're trying to figure out how many people link to www.example.com (regardless of whether it's through a link-shortener URL), you would just do a Google search for link:www.example.com.
e.g.: http://www.google.com/search?q=link:www.example.com
Note that this will only find pages that have been indexed, so pages that haven't been crawled, or pages that get crawled infrequently, will not show up in the results until a later date (if at all).
Since all sites have different algorithms for shortening the URLs, and these are different sites that most likely do not share their data with each other, how can you hope to find all of them in a single or small number of queries?
All you can do is brute-force it, and even that might not do any good if a site is content to create a new value for the same long-form URL (especially if you send a different long-form URL that maps to the same place, like http://www.stackoverflow.com/ rather than http://stackoverflow.com/).
For this to really work, there would have to be a site that ALREADY collects all of this information automatically, and which the URL-shortening sites voluntarily call. And even if you wrote such a site, that doesn't account for the URL-shortening sites already out there that already have data!
In short, I do not see how this is remotely possible, unless I'm wrong about there being such a database somewhere out there.
So, months after asking this question, I came across a solution to a similar question: how to tell how many times a link has been shared on Facebook. The solution is a simple new API call:
http://graph.facebook.com/http://stackoverflow.com
which returns the following JSON data:
{
    "id": "http://stackoverflow.com",
    "shares": 1627
}
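In PHP, reading that share count takes only a couple of lines, using the same graph URL shown above:

<?php
// Ask the Facebook Graph API how often a URL has been shared.
$json = file_get_contents('http://graph.facebook.com/http://stackoverflow.com');
$data = json_decode($json, true);
echo $data['shares'] . ' shares';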
