I am currently building a crowdfunding web application with Rails, and in order to send registration confirmations, password resets, and newsletters I need a mail service.
Currently I am using a regular Gmail account. Is doing so advisable? And which service should I switch to once the business gets going?
It's fine as long as you don't have too many emails to send out. Gmail has limits on the amount of mail that you can send and receive.
You can find them here: https://support.google.com/a/answer/166852?hl=en
Other than the limits, there isn't much of a problem with using Gmail. Unfortunately I can't answer the second part of your question, as I don't have much experience in that area.
I've found using Gmail to be fairly reliable, but you do need to be aware of throttling. This probably won't be a problem for your registration confirmations or password resets, but it may be someday for your newsletters. I forget the specifics, but if you send out more than about one thousand emails per hour (see the link in @Vinay's answer for specifics) you start to get throttled, and throttling lasts for a period of time during which any emails you attempt to send simply don't go out.
Despite Gmail's decent reliability, you should consider using Resque, Sidekiq, or Delayed::Job for the actual sending of the emails. This is just good policy for any external service, and Gmail is no different in the end. Using a background job for your mail sending allows you to retry an email send until it works. This helps both when the Gmail SMTP service goes down and when you have a bug in your email-sending code.
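As an illustration, here is a minimal sketch of that approach, assuming Rails 4.2+ with an ActiveJob backend such as Sidekiq already configured (the mailer name and addresses are hypothetical):

    # app/mailers/user_mailer.rb
    class UserMailer < ActionMailer::Base
      default from: "no-reply@example.com"

      def registration_confirmation(user)
        @user = user
        mail(to: user.email, subject: "Please confirm your registration")
      end
    end

    # In a controller or model callback:
    # deliver_later enqueues the email as a background job instead of sending it
    # inline, so a failed SMTP call can simply be retried by the job backend.
    UserMailer.registration_confirmation(user).deliver_later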
The question about what service to switch to when you outgrow Gmail is very much a matter of opinion, which is the type of question we try to avoid on Stack Overflow (and the reason why your question already has a close vote on it).
I am a big fan of SendGrid, especially if you are running on Heroku. It is simple to add to a Rails app on Heroku: https://addons.heroku.com/sendgrid
It will most likely be free when you launch; the free version supports up to 200 emails/day (if you get beyond that, then you are doing well and can afford to pay for it). It also has some nice tools that help you identify which emails aren't being delivered and why.
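For reference, here is a hedged sketch of what pointing ActionMailer at an external SMTP provider such as SendGrid typically looks like (the environment variable names and the domain are assumptions, not something the add-on defines):

    # config/environments/production.rb
    config.action_mailer.delivery_method = :smtp
    config.action_mailer.smtp_settings = {
      address:              "smtp.sendgrid.net",
      port:                 587,
      user_name:            ENV["SENDGRID_USERNAME"],
      password:             ENV["SENDGRID_PASSWORD"],
      domain:               "yourdomain.example",   # hypothetical HELO domain
      authentication:       :plain,
      enable_starttls_auto: true
    }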
Woo-hoo! Thank goodness we have MailHog for reliable Docker-based testing environments for outbound mail, allowing us to easily debug mail-sending services without bothering production users ...
... but what about inbound emails?
On one of our sites, users can send an email to a designated support address and their email will automagically be entered into a ticket system. (As you can well imagine, I am now tasked with replacing that system.) So, is(n't) there already a sort of "reverse MailHog": a system that would allow me to define an "incoming-mail server" which I could, from time to time, tell that it "had just received thus-and-so message"?
Since this now seems to me to be just as obvious a requirement as the one MailHog so adroitly solved, I now ask the community: am I overlooking something wonderful that I literally don't yet know about? (Pretty-please tell me in your reply that the answer is "yes.")
I joined a Slack team and now I want to play with the bots there. But there seem to be lots of different ways to do it, and they all involve some server with an API.
Isn't there an easy way to write a script (is that a bot?) for end users? I write a file, load it into the Slack app, and it works?
My first idea (just to try it out) was to respond to certain keywords automatically from my own account.
There are four types of custom Slack integrations:
Incoming webhooks: your code sends an HTTP POST to Slack to post a message
Custom slash commands: Slack sends your code an HTTP POST when someone says /<whatever>
Outgoing webhooks: roughly the same as slash commands, but they can respond to any word at the beginning of a message
Bot users: your code connects to Slack via a WebSocket and sends and receives events
In all of these cases, you need code running somewhere to actually do the work. (In the case of the bot, that code can run anywhere with network connectivity. In the other cases, you'll need a server that's listening on the internet for incoming HTTP/HTTPS requests.)
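As an illustration of the first option, here is a minimal sketch of posting to an incoming webhook from plain Ruby (the webhook URL is a placeholder; Slack issues the real one when you create the integration):

    require "net/http"
    require "json"
    require "uri"

    # Placeholder URL from the incoming-webhook configuration page.
    uri = URI("https://hooks.slack.com/services/T0000/B0000/XXXXXXXX")

    # Post a JSON payload; Slack turns it into a message in the configured channel.
    response = Net::HTTP.post(uri, { text: "Hello from my script!" }.to_json,
                              "Content-Type" => "application/json")
    puts response.code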
Slack itself never hosts/runs custom code. I'd say https://beepboophq.com/ is the closest thing to what you're looking for, since they provide hosting specifically for Slack bots.
Another option for things like slash commands is https://www.webscript.io (which I own). E.g., here's the entirety of a slash command running on Webscript that flips a coin:
    return {
      response_type = 'in_channel',
      text = (math.random(2) == 1 and 'Heads!' or 'Tails!')
    }
If you want to do something really basic, you might consider this service:
https://hook.io/
You can set up a webhook there using the provided URL plus your token (which you can pass as an environment variable) and code some simple logic.
I hope this helps.
There are plenty of solutions for that.
You can use premade solutions like:
https://hook.io
https://www.zapier.com
https://www.skriptex.io (disclaimer: that's my app)
Or you can set up a Hubot instance and host it yourself.
Slack's API is also good: you can just create a Slack app, bind it to some commands, and it will interact with one of your servers.
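For instance, here is a hedged sketch of a tiny endpoint that could back a custom slash command, using Sinatra (the route and the response text are illustrative assumptions; only the JSON response format comes from Slack):

    require "sinatra"
    require "json"

    # Slack sends a form-encoded POST to this URL when someone types the slash command.
    post "/slack/command" do
      content_type :json
      # Reply in the channel with a coin flip, mirroring the webscript example above.
      { response_type: "in_channel",
        text: [true, false].sample ? "Heads!" : "Tails!" }.to_json
    end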
How does Rails handle multiple requests from different users without them colliding? What is the logic?
E.g., user1 logs in and browses the site. At the same time, user2, user3, ... log in and browse. How does Rails manage this situation without any data conflict between the users?
One thing to bear in mind here is that even though users are using the site simultaneously in their browsers, the server may still only be handling a single request at a time. Request processing may take less than a second, so requests can be queued up and processed without causing significant delays for the users. Each response starts from a blank slate, taking only information from the request and using it to look up data in the database; nothing is carried over from one request to the next. This is called the "stateless" paradigm.
If the load increases, more Rails servers can be added. Because each response starts from scratch anyway, adding more servers doesn't create any problems to do with "sharing of information", since all information is either sent in the request or loaded from the database. It just means that more requests can be handled per second.
Where there is a feeling of "continuity" for the user, for example staying logged into a website, this is done via cookies, which are stored on their machine and sent as part of each request. The server can read this cookie information from the request and, for example, NOT redirect someone to the login page, because the cookie tells it they are already logged in as user 123 or whatever.
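As a minimal sketch of that idea (the User model, session key, and login_path are hypothetical):

    # app/controllers/application_controller.rb
    class ApplicationController < ActionController::Base
      before_action :require_login

      private

      # Each request re-reads the session cookie; nothing is shared between requests.
      def current_user
        @current_user ||= User.find_by(id: session[:user_id])
      end

      def require_login
        redirect_to login_path unless current_user
      end
    end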
In case your question is about how Rails tells users apart, the answer is that it uses cookies to store the session. You can read more about it here.
Also, data does not conflict, since you get a fresh instance of the controller for each request.
RailsGuides:
When your application receives a request, the routing will determine which controller and action to run, then Rails creates an instance of that controller and runs the method with the same name as the action.
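To illustrate that, here is a minimal hypothetical route and controller (the names are made up for the example):

    # config/routes.rb
    Rails.application.routes.draw do
      get "/posts/:id", to: "posts#show"
    end

    # app/controllers/posts_controller.rb
    class PostsController < ApplicationController
      # Rails builds a fresh PostsController instance for every matching request,
      # so instance variables like @post never leak between users.
      def show
        @post = Post.find(params[:id])
      end
    end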
That is guaranteed not by Rails but by the database that the web service uses. The property you mentioned is called isolation. It is one of several properties that a practical database has to satisfy, known collectively as ACID.
This is achieved using a "session": a bunch of data specific to the given client, available server-side.
There are plenty of ways for a server to store a session; typically Rails uses a cookie: a small (typically around 4 kB) piece of data that is stored in the user's browser and sent with every request. For that reason you don't want to store too much in there. However, you usually don't need much: just enough to identify the user while still making it hard to impersonate them.
Because of that, Rails stores the session itself in the cookie (as this guide says). It's simple and requires no setup. Some consider the cookie store unreliable and use persistence mechanisms instead: databases, key-value stores and the like.
Typically the workflow is as follows:
A session id is stored in a cookie when the server decides to initialize a session
The server receives a request from the user and fetches the session by its id
If the session says that it represents user X, Rails acts as if the request really comes from user X
Since different users send different session ids, Rails treats them as different users and outputs the data relevant to whichever one it detects, on a per-request basis.
Before you ask: yes, it is possible to steal another person's session id and act in that person's name. This is called session hijacking, and it's only one of the many security issues you might run into unless you're careful. That same page offers some more insight on how to protect your users.
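As a hedged illustration of the usual first-line defenses (the session key name is an assumption; the options shown are standard Rails/Rack settings):

    # config/environments/production.rb
    # Force HTTPS so the session cookie is never sent in the clear.
    config.force_ssl = true

    # config/initializers/session_store.rb
    # Mark the cookie secure and HTTP-only so client-side scripts can't read it
    # and it is only transmitted over TLS.
    Rails.application.config.session_store :cookie_store,
      key: "_myapp_session", secure: true, httponly: true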
Additionally, you could use a multithreaded server such as Puma...
I have a simple and straightforward Rails app, and it requires reloading the page each time I want to send an update or check whether new info is available on the server.
I want to do two things:
Send data to my rails app without reloading a page
Show updates to the data which are done by other users (also without reloading a page)
Any recommendations on gems, frameworks, or technologies to accomplish these two tasks?
For the first point, a classical Rails Ajax call (with the data-remote attribute) should do the job.
For the second, you should consider using sockets, with services like Pusher or Faye. Take a look at this gem, which permits syncing partials: https://github.com/chrismccord/sync
If you can't use sockets, the classical fallback is a periodic Ajax call to your backend.
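To illustrate the first point, here is a minimal sketch of a remote form (the model, view path, and field are hypothetical; the controller would then respond to the Ajax request with format.js or format.json):

    <%# app/views/comments/_form.html.erb %>
    <%# remote: true makes Rails submit this form via Ajax instead of a full page reload. %>
    <%= form_for @comment, remote: true do |f| %>
      <%= f.text_area :body %>
      <%= f.submit "Post" %>
    <% end %>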
I wonder whether any gem wraps all these details
Pusher
Asynchronicity
Asynchronicity is standard functionality for web applications - you'll have to open an asynchronous request to your server, which will allow you to then receive as much data as you want.
Asynchronous connections are best defined within the scope of HTTP. HTTP is stateless, meaning it treats your requests as unique every time (does not persist the data / connectivity). This means you can generally only send single requests, which will yield a response; nothing more.
Asynchronous requests occur in parallel to the "stateless" requests, and essentially allow you to receive updated responses from the server through means other than the standard request/response cycle, typically through JavaScript.
--
There are 2 main ways to initiate "asynchronous" requests:
SSE's (Server Sent Events) - basically Ajax long polling
Websockets (opens a perpetual connection)
--
SSE's
Server-sent events are an HTML5 technology which basically allows you to "ping" a server via JavaScript and handle any updates that come through:
A server-sent event is when a web page automatically gets updates from a server.
This was also possible before, but the web page would have to ask if any updates were available. With server-sent events, the updates come automatically.
Setting up SSEs is simple:

    // app/assets/javascripts/application.js
    // Open a persistent connection to the SSE endpoint and handle each update.
    var source = new EventSource("/your/endpoint");
    source.onmessage = function(event) {
      alert(event.data);
    };
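On the Rails side, a hedged sketch of what such an endpoint could look like with ActionController::Live (the controller name and the payload are assumptions):

    # app/controllers/updates_controller.rb
    class UpdatesController < ApplicationController
      include ActionController::Live

      def index
        response.headers["Content-Type"] = "text/event-stream"
        sse = ActionController::Live::SSE.new(response.stream)
        # Push a timestamp a few times; a real app would stream actual model changes.
        3.times do
          sse.write(time: Time.now.to_s)
          sleep 3
        end
      ensure
        sse.close
      end
    end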
Although native in every browser except IE, SSEs have a major drawback: they act very similarly to long polling, which is super inefficient.
--
Websockets
The second thing you should consider is WebSockets. These are by far the recommended option, but since I haven't set them up myself yet, I don't have much specific information on how to use them.
I have used Pusher before, though, which basically provides a third-party WebSocket for you to connect to. WebSockets only connect once and are consequently far more efficient than SSEs.
I would recommend at least looking at Pusher - it sounds exactly like what you need.
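For what it's worth, here is a hedged sketch of triggering an event from Ruby with the pusher gem (the channel and event names are made up; the credentials come from your Pusher dashboard):

    require "pusher"

    # Credentials are placeholders; real values come from the Pusher dashboard.
    pusher = Pusher::Client.new(
      app_id: ENV["PUSHER_APP_ID"],
      key:    ENV["PUSHER_KEY"],
      secret: ENV["PUSHER_SECRET"]
    )

    # Broadcast an event; subscribed browsers receive it over the WebSocket.
    pusher.trigger("updates", "new-data", { message: "Something changed" })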
I want to transfer the full amount of data (all subfolders) from one IMAP server to another, without any data loss.
I'm not sure what you mean by 'PHP Email Migration', as I don't see how PHP fits into the equation.
That said, there are a few IMAP migration tools that can accomplish what you are describing. I'm the cofounder of a product called YippieMove that can do this, but there are also open source alternatives, like imapsync. If you spend some time on Google, you'll find more options.
As vpetersson mentioned, this doesn't quite fit the PHP part, as migrating emails from one IMAP server to another can be done easily with ANY IMAP client software as well, for example Microsoft Outlook, Mozilla Thunderbird, Apple Mail, and so on.
How you do it is simple.
If the number of email accounts you need to migrate is limited, here is a sample workflow for Mozilla Thunderbird; you can follow the same process in any other client too.
Make a new IMAP account and name it Source. (Make sure you select IMAP as the protocol and NOT POP.)
Configure it so that it can connect to the current source IMAP server; depending on the volume of emails in that account and the speed of your connection, it may take a while to sync all emails.
Once the sync is complete, create a new account for the target server, name it, let's say, Target, and configure it as IMAP too.
The Target account will obviously be empty; now simply copy all folders from the Source account to the Target account.
Thunderbird will handle the whole copy process and will also upload all mails to the new server automatically (as that is the default behavior for an IMAP account).
Once done, you will have a complete clone of your email account on both servers.
Alternatively, if you have to do it in PHP, maybe because you have hundreds of email accounts and the method described above is not practical, then follow the steps below.
You could use PHP's IMAP library, but if you have control over your server, I recommend PEAR's Net_IMAP library, which has some features that are missing from PHP's standard IMAP library.
Write a loop over all your accounts, and for each account:
connect to the source server

    $imapServer = new Net_IMAP($emailHost, 143);
    $loggedIn = $imapServer->login($loginName, $password);
    if ($loggedIn == true) {
        // migration code goes here
    }
find all folders

    $mBoxes = $imapServer->getMailboxes('', 0, true);

for each folder

    $mBox = $imapServer->selectMailbox($folderName);

find all messages

    $msgsList = $imapServer->getMessagesList();

get the raw message of each one

    // Each entry returned by getMessagesList() identifies a message;
    // adapt the loop to whatever structure the library actually returns.
    foreach ($msgsList as $msgUid) {
        $fullRawMail = $imapServer->getMessages($msgUid, false);
    }
connect to the target server
check whether the target server has the same folders as the source; if it doesn't, create them
upload the raw message to the target server (into the matching folder). You can use PHP's imap_append function for that:

    // $ImapStream is the imap_open() connection to the target server.
    imap_append($ImapStream, $folderName, $fullRawMail, "\Seen");