Are Google Forms privacy-preserving? - google-sheets

Is a Google Form a privacy-preserving way to conduct a survey?
Some people are not comfortable with it. Is that because most people have a Google account, and unless they use private mode, they give more information about themselves to Google? Does Google use the responses?

No.
The contents of Google Forms (which usually feed into Google spreadsheets) are shared between the submitters (only their own data, obviously), you as the form owner, and the entirety of Google's internal infrastructure.
Google using the data directly would be a really major infraction, just as it would be if they acted on the contents of a Gmail account; however, they have plenty of scope to use the information in indirect, less obvious ways. For example, the data that someone submits in a form could be used on other sites for ad targeting. Google does this in Gmail: if someone sends you an email about something, you can expect to see ads on that subject both within Gmail and on other sites. To be fair, they may have stopped that particular practice, but the wider point is that you really can't tell.
"Private mode" is irrelevant in this case; it gives very little protection to start with, and if a form requires you to be logged in to a Google account, they know exactly who you are anyway.
On top of this, you have the problems caused by the Schrems II judgement, which effectively made it illegal to store any personal data (in the GDPR sense) about people in the EU in the US. Prior to this judgement, Google relied on the Privacy Shield arrangement and "Standard Contractual Clauses" (SCCs) to allow this. Privacy Shield is simply dead, and while SCCs are valid in general, they are not usable in the US (though both Google and Facebook have been trying to gaslight to the contrary) because the ongoing lack of US federal privacy laws and the persistent overreach of US security agencies render it impossible to make their claims valid. This is unlikely to change in the near future.

Related

Privacy notice on published Google Sheets

Is it possible/recommended to add a GDPR notice to a shared Google Sheet which is published to the web for those who hold the link?
The sheet contains a live timetable of vessel arrivals/departures, shared among stakeholders in the port; unlike other services, which also charge subscribers, it requires no sign-up. I don't see any protected data inside the sheet, and it doesn't share any.
I was thinking of adding a link in the first row to a suitable policy, but in fact I don't know what to guarantee, since this service is one-way only.
It's not really one-way. It might be read-only, but everyone who visits any Google doc is identifiably and persistently tracked by Google, and that data is used to target ads. So yes, any use of Google Docs should carry such a warning, though it should really be Google itself issuing that warning rather than you.

Inviting event attendees programmatically on iOS 10

I've been using Stack Overflow for about 5 years now and haven't felt the need to ask a single question yet; I've always found the answer I needed in previous threads. That just changed, and I have a question that I really can't figure out. And it sounds so easy to do.
So the question is: how do you invite attendees to, or accept/decline, calendar events under iOS 10? And please, no, we don't want to bring up an EKEventViewController; we'd like to do this in our own UI. Under iOS 9 this was possible by forcing EKAttendee objects into the EKParticipant array with setValue:forKey:. But under iOS 10 this produces an error saying 'Attendees can't be modified'.
I have used a Technical Support credit with Apple and got the reply that this was not possible. It is not possible using their APIs.
The closest thing to an answer I've got is to use iMIP (https://www.rfc-editor.org/rfc/rfc6047#section-2.2.1). If that's the way to go, could someone help me along on how to actually set that up? I'm not well versed in back-end development; I'm all front-end, so I wouldn't really know where to start.
There also seem to be some CalDAV servers on GitHub (https://github.com/mozilla-b2g/caldav), but I'm not sure how good they are or exactly what you need to set one up.
So basically, is there anyone who could give a child's explanation of just how the heck we can send nice invites to calendar events? And if there are different solutions for Google and Apple accounts (obviously under the hood, but implementation-wise), that would be very helpful to know too.
Is this something that requires a ton of implementation on our own servers, or is there some reliable service to use? That would be ideal. Maybe you should build one; you've got at least one customer here :-)
Appreciate any help!
You cannot modify attendees using EventKit, but Apple already told you that:
I have used a Technical Support credit with Apple and got the reply that this was not possible. It is not possible using their APIs.
The hack of accessing the internal objects using KVC was, well, a hack, not a documented API. No surprise they killed it.
So how do calendar invites work? That in itself is a very complex topic (consider delegation, resource booking like rooms, etc.). There is a whole consortium that works on this (CalConnect); they also have a broad overview: Introduction to Internet Calendaring and Scheduling.
If you are serious about scheduling/calendaring software, it may make a lot of sense to join CalConnect for their interop events etc.
But you wanted a 'child's explanation'. I can't give that, but here is a short overview.
iTIP
iTIP is a standard which defines how scheduling messages flow: e.g. you send a message to your attendee, the attendee responds with an accept/decline, what happens if a meeting is cancelled, and all that.
It does NOT, however, specify how those messages are transferred. It is just a model of how the message flow works between the organiser and the participants.
Most 'big' calendaring systems (Exchange, Google, CalDAV servers like iCloud) use iTIP or at least something very similar.
iMIP
iMIP is a standard which defines how to exchange iTIP messages using email. If you invite someone using iMIP, you send them a special email message with an iCalendar payload containing the invite. If your attendee accepts, their client sends back another email with an iCalendar payload containing the reply.
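As a rough illustration (the addresses, UID and product ID here are made up), an iMIP invitation is an email carrying a text/calendar part along these lines:

BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Example Corp//Example Client//EN
METHOD:REQUEST
BEGIN:VEVENT
UID:1234-example-uid
DTSTAMP:20170301T090000Z
DTSTART:20170302T100000Z
DTEND:20170302T110000Z
SUMMARY:Project kickoff
ORGANIZER:mailto:organizer@example.com
ATTENDEE;PARTSTAT=NEEDS-ACTION;RSVP=TRUE:mailto:attendee@example.com
END:VEVENT
END:VCALENDAR

The attendee's reply is a similar payload with METHOD:REPLY and the ATTENDEE line's PARTSTAT set to ACCEPTED or DECLINED.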
iMIP is supported by a lot of systems and was, for a long time, pretty much the only way to exchange invitations between different systems (say Outlook and Lotus Notes).
However: the iOS email client does NOT support iMIP (unlike macOS or Outlook). So if someone sends an iMIP invite to your iOS device, you won't be able to respond to it. (Reality is more complex, but basically it is like that.)
CalDAV
CalDAV is a set of standards around calendars stored on a server. Many, many servers support CalDAV: iCloud uses it, and Yahoo, Google, etc. all support it. The important exception is Exchange, which doesn't.
In its basic setup, CalDAV just acts as a store. You use HTTP to store (PUT) and retrieve (GET, etc.) events and todos in the iCalendar format.
In addition, many CalDAV servers (e.g. iCloud) do 'server-side scheduling'. That is, if you store an event to the server which is a meeting (i.e. has attendee properties), the server will fan out the invitations: either internally, if the attendees live on the same server, or again using iMIP. A sketch of such a request follows.
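A minimal sketch of storing a meeting on a CalDAV server (the host, calendar path and event UID are made up, and a real server will also demand authentication):

PUT /calendars/user/default/1234-example-uid.ics HTTP/1.1
Host: caldav.example.com
Content-Type: text/calendar; charset=utf-8

(the same kind of iCalendar payload as in the iMIP example above, including the ORGANIZER and ATTENDEE properties)

On a server that does server-side scheduling, the presence of the ATTENDEE properties is what triggers the invitation fan-out.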
Exchange
Exchange supports iMIP but not CalDAV. You usually access it using one of its own web service APIs, e.g. ActiveSync or Exchange Web Services. I'm no expert on them, but I'm sure they allow you to create invites. Exchange and Outlook have an iTIP-like invite flow.
etc
Is this something that requires a ton of implementation on our own servers or is there some reliable service to use?
This really depends on your requirements and needs. Do you need to process replies or just send out generic events?
If you want to host a calendar store, it probably makes sense to use an existing CalDAV server.
Calendar invitations are a very complex topic, and you need to be very specific about your actual requirements to find a solution. In general, interoperable invitations in 2017 are still, let's say, 'difficult'.
P.S.: Since you've been using Stack Overflow for about 5 years now, you should know that this question is too broad for this site.

Some general Twitter4J questions

I'm trying to do a write-up of Twitter4J for part of a uni project, but I'm getting hung up on a few things. From the Twitter4J API:
void sample()
Starts listening on random sample of all public statuses. The default access level provides a small proportion of the Firehose. The "Gardenhose" access level provides a proportion more suitable for data mining and research applications that desire a larger proportion to be statistically significant sample.
This implies that by default, "default access" is provided to the stream, but that another type of access, "Gardenhose" access, is available. Is this correct? And if so, how do you get the higher Gardenhose access?
I'm asking because I've seen some answers on SO suggest that there is only one level of access - the Gardenhose - and I'm trying to clear this up once and for all.
In addition, I would like a reference (if possible) for the number of tweets the sample stream gives access to. I've read lots of people cite 1% for "default access" and 10% for "gardenhose access", but I can't find this anywhere in the API.
So to sum up, two questions:
Does the sample stream have a "default access" and a "gardenhose access", or just one of those?
How much of the Twitter firehose stream can these levels of access gain?
If replying, please have links to reference-able API where possible.
The gardenhose is different from the default sample stream; you would have had to request access from Twitter in order to use it.
However, I am not sure whether Twitter still allows access to the gardenhose, or even whether it still exists. It seems the current mechanism may be to use one of Twitter's preferred data partners:
Using the Streaming API?
Every Twitter account can connect to a small sampling of the Streaming API. Accounts that need increased access for data gathering or analytical reasons should check out our preferred partners page.
(source)
It may be different for students or educational institutions, and the gardenhose may still be available to you. Previously you would either e-mail api-research@twitter.com or use the following form, but I have no idea whether these methods still work - the post is quite old.
As for the percentage of Tweets that the default sample stream allows access to, the best reference I could find was a comment made by a Twitter employee on the developer forums - emphasis mine:
I would recommend just using the 1% sample stream from https://stream.twitter.com/1/statuses/sample.json that you can connect to with your Twitter account. It's unlikely that you'll be in a situation where you can access all of the data and will have to make do with a sample. At about 230 million tweets a day, you'd still be theoretically getting 2.3 million tweets a day.
(source)
Although, again, this is an old post.
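For completeness, connecting to the default sample stream with Twitter4J is straightforward. A minimal sketch, assuming Twitter4J 4.x with OAuth credentials configured in twitter4j.properties:

import twitter4j.Status;
import twitter4j.StatusAdapter;
import twitter4j.TwitterStream;
import twitter4j.TwitterStreamFactory;

public class SampleStreamDemo {
    public static void main(String[] args) {
        // Reads consumer key/secret and access token from twitter4j.properties.
        TwitterStream stream = new TwitterStreamFactory().getInstance();
        stream.addListener(new StatusAdapter() {
            @Override
            public void onStatus(Status status) {
                System.out.println("@" + status.getUser().getScreenName() + ": " + status.getText());
            }
            @Override
            public void onException(Exception ex) {
                ex.printStackTrace();
            }
        });
        // Starts listening on the default random sample of all public statuses.
        stream.sample();
    }
}

Note there is no parameter here to request the gardenhose; the elevated level was tied to the account's access level on Twitter's side, not to anything in the client code.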
Regarding the firehose stream: as specified by the documentation, you need to be granted permission to access it, and I believe very few people have full access to this stream:
GET statuses/firehose
This endpoint requires special permission to access.
Returns all public statuses. Few applications require this level of access. Creative use of a combination of other resources and various access levels can satisfy nearly every application use case.
Overall, documentation on the different access levels and what they offer is scarce; I suggest contacting Twitter directly to discuss your requirements, or contacting one of their data partners.
Apologies if this wasn't as concrete as you would have liked. Good luck with your research.

Business listing search APIs

I would like to include local business addresses/phone numbers on my site.
Does anyone have thoughts on using the Google Local Search API vs. Twitter's geo API vs. purchasing a directory listing?
It mainly depends on your site and needs (real-time, offline, ...).
Google Local gives very good results, the best in my experience (compared to other APIs). You should check the terms of service of each service. If I remember correctly, Google doesn't allow use of its local API if your site charges users money.
Also, I think Google's TOS limits you to client-side usage, but you should read the TOS to see whether that's true.
I haven't tried the Twitter geo API much, but I remember it didn't fit my needs.
Purchasing a directory listing is not cheap. Again, it depends on your needs: do you need US business listings? Worldwide? If you want US businesses, the leading companies for purchasing a DB of listings are Localeze, infoUSA, and Acxiom.
Besides Google Local Search (which has actually been deprecated), there's now SimpleGeo Places, which is free for low-volume use and without restrictive terms of service. (I don't work for them.)
You could also use the Google Places API (which has not been deprecated) using the instructions here.
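For reference, a Places API nearby search is a plain HTTPS request along these lines (the coordinates are made up, and YOUR_API_KEY stands for a key from the Google APIs console):

https://maps.googleapis.com/maps/api/place/nearbysearch/json?location=40.7128,-74.0060&radius=500&type=store&key=YOUR_API_KEY

The JSON response contains, among other fields, each business's name, vicinity (address) and place ID, which can be fed into a Place Details request to get the phone number.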

How do search engines see dynamic profiles?

Recently, search engines have been able to index dynamic content on social networking sites. I would like to understand how this is done. Are there static pages, created by a site like Facebook, that update semi-frequently? Does Google attempt to store every possible username?
As I understand it, a page like www.facebook.com/username is not an actual file stored on disk but shorthand for a database query (something like SELECT * FROM users WHERE username = ?) whose result is rendered into the page. How does Google know about every user? This gets even more complicated when things like tweets are involved.
EDIT: I guess I didn't really ask what I wanted to know. Do I need to be as big as Twitter or Facebook for Google to make special arrangements to crawl my site? Will Google automatically find my users' profiles if I allow anyone to view them? If not, what do I have to do to make that work?
In the case of tweets in particular, Google isn't 'crawling' for them in the traditional sense; it has integrated with Twitter to provide the search results in real time.
In the more general case of your question, dynamic content is not new to Facebook or Twitter, though it may seem to be. Google crawls a URL; the URL provides HTML data; Google indexes it. Whether it's a dynamic query that's rendering the page, or whether it's a cache of static HTML, makes little difference to the indexing process in theory. In practice, there's a lot more to it (see Michael B's comment below.)
And see Vartec's succinct post on how Google might find all those public Facebook profiles without actually logging in and poking around FB.
OK, that was vastly oversimplified, but let's see what else people have to say...
As far as I know, Google isn't able to read and store the actual contents of profiles, because the Google bot doesn't have a Facebook account, and doing so would be a huge privacy breach.
The bot works by hitting facebook.com and then following every link it can find. Whatever content it sees on the pages it hits, it stores. So even if it follows a dynamic URL like www.facebook.com/username, it will just remember whatever it saw when it went there. Hopefully, in that particular case, that isn't all the private data of said user.
Additionally, Facebook can and does provide special instructions that search bots follow, so that Google results don't include a bunch of login pages.
Profiles can be linked to from outside, and the site may provide a sitemap (see the sketch below).
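As an illustration (the domain and usernames are made up), a sitemap listing public profile URLs lets a crawler discover every profile without having to guess usernames:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- example.com and the profile names are placeholders -->
  <url><loc>http://www.example.com/alice</loc></url>
  <url><loc>http://www.example.com/bob</loc></url>
</urlset>

Submitting such a file (referenced from robots.txt or via the search console) is the standard way to get dynamically generated profile pages crawled, whatever the site's size.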
