I am new to Ruby on Rails. I am using a PostgreSQL database with Rails 3.2.13, and we already have 200K rows of records in it. I need to send those same 200K records to another standalone Windows application, so I created a Rails REST API for this purpose. Currently the REST API takes a long time to process the data and times out after 3 minutes.
I am sending 1000 records at a time, so the API sends records 1-1000, then 1001-2000, and so on. This avoids the timeout. Is this a good approach for handling bulk data?
Does Rails have any built-in functionality to handle this type of operation? Please help me.
Thanks
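For illustration, the kind of chunked endpoint described above might look roughly like the following minimal sketch; the Record model, the page parameter, and the batch size of 1000 are assumptions, not details from the original app.

```ruby
# app/controllers/api/records_controller.rb
# Minimal sketch of a paginated/chunked endpoint (illustrative names).
class Api::RecordsController < ApplicationController
  BATCH_SIZE = 1000

  def index
    page    = params.fetch(:page, 1).to_i
    records = Record.order(:id)
                    .limit(BATCH_SIZE)
                    .offset((page - 1) * BATCH_SIZE)

    render json: records
  end
end
```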
In short, no, I don't think this is a good approach for transferring bulk data.
Putting the security concerns aside (which are reason enough on their own not to), you'll need to handle data/schema consistency, connection reliability, datatype integrity, parsing and sanitizing strings, serialization/deserialization, etc. It sounds like a huge headache to me.
Bulk data transfer between databases isn't a responsibility/concern of Rails. I'd stick to doing this entirely on the backend and set up a new database as a replication slave of the master.
Related
The background is that I have a Ruby on Rails application that connects to both a PostgreSQL and a MongoDB database, and I want a column in Postgres that keeps track of the number of records in MongoDB.
I cannot use a counter cache because all of the records persisted in MongoDB come from a microservice API written in Phoenix. I split it out as a microservice to reduce the workload on the Ruby on Rails side.
Currently I am using a cron job executed every 10 minutes, but it takes up all of the MongoDB resources and may not be the best solution in the long run.
So, if you have any suggestions, please feel free to share them with me! Thank you so much :D
If you are only updating every 10 minutes, the value is inherently imprecise already, so estimatedDocumentCount should suffice and will be significantly lighter on resources.
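For illustration, a minimal sketch of what the periodic job could do with the Ruby mongo driver; the collection name, the Stats model, and the column name are assumptions, not from the original app.

```ruby
require 'mongo'

# Connect to the MongoDB the Phoenix service writes to (illustrative URI).
client = Mongo::Client.new('mongodb://localhost:27017/mydb')

# estimated_document_count reads collection metadata instead of scanning
# documents, so it is much cheaper than counting with a query.
count = client[:events].estimated_document_count

# Persist the figure in the Postgres column (illustrative model/column).
Stats.update_all(mongo_record_count: count)
```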
I have a very high-traffic Rails app. We use an older version of PostgreSQL as the backend database, which we need to upgrade. We cannot use the data-directory copy method because the data file formats have changed too much between our existing release and the current PostgreSQL release (10.x at the time of writing). We also cannot use the dump-restore process for the migration because we would either incur several hours of downtime or lose important customer data. Replication is not possible either, as the two DB versions are incompatible for it.
The strategy so far is to have two databases and copy all the data (and functions) from the existing installation to a new one. However, while the copy is happening, we need data arriving at the backend to reach both servers, so that once the data migration is complete, the switch becomes a matter of redeploying the code.
I have figured out the other parts of the puzzle but am unable to determine how to send all writes happening on the Rails app to both DB servers.
I am not bothered if both installations get queried when displaying data to the user (I can discard the data coming from the new installation); so if it is possible at the driver level, or by adding a line somewhere in ActiveRecord, I am fine with it.
PS: Rails version is 4.1 and the company is not planning to upgrade that.
You can have multiple databases by adding another entry to the database.yml file. After that you can have a separate abstract class inheriting from ActiveRecord::Base and connect it to the new entry (see the sketch below).
Have a look at this post.
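A minimal sketch of that idea, Rails 4.x style; the new_primary entry name and the NewDatabaseRecord class are illustrative, not from the original setup.

```ruby
# config/database.yml gains a second entry, e.g.:
#
#   new_primary:
#     adapter:  postgresql
#     database: myapp_new
#     host:     new-db.example.com
#
# app/models/new_database_record.rb
class NewDatabaseRecord < ActiveRecord::Base
  self.abstract_class = true
  establish_connection :new_primary
end

# Models that should talk to the new installation inherit from it:
class NewUser < NewDatabaseRecord
  self.table_name = 'users'
end
```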
However, as far as I can see, that will not solve your problem. Redirecting new data to the new DB while copying from the old one can lead to data inconsistencies.
For example, the ID of a record can end up different because of the two data source feeds.
If you are upgrading the DB, I would recommend defining a scheduled downtime and letting your users know in advance. I would say having a small downtime is far better than fixing inconsistent data down the line.
When you have a downtime:
Let the customers know well in advance.
Keep the downtime minimal.
Have a backup procedure; in the event the new site takes longer than you expect, roll back to the old site.
I'm developing a polling application that will deal with an average of 1000-2000 votes per second coming from different users. In other words, it'll receive 1k to 2k requests per second with each request making a DB insert into the table that stores the voting data.
I'm using RoR 4 with MySQL and planning to push it to Heroku or AWS.
What performance issues related to database and the application itself should I be aware of?
How can I address this amount of inserts per second into the database?
EDIT
I was thinking of not inserting into the DB on each request, but instead writing the insert data to an in-memory store. A scheduled job running every second would then read from this store and generate a single bulk insert, so each insert doesn't have to be made atomically. But I can't think of a nice way to implement this.
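For what it's worth, a minimal sketch of that buffering idea, assuming Redis as the in-memory store; the key name, the votes table, and its columns are illustrative, not from the original app.

```ruby
require 'redis'
require 'json'

REDIS = Redis.new

# Request side: push the vote onto a Redis list instead of hitting MySQL.
def record_vote(poll_id, option_id)
  REDIS.rpush('pending_votes', { poll_id: poll_id, option_id: option_id }.to_json)
end

# Scheduled side: a job run every second drains the list and issues a
# single multi-row INSERT.
def flush_votes(batch_size = 5000)
  rows = []
  batch_size.times do
    raw = REDIS.lpop('pending_votes') or break
    rows << JSON.parse(raw)
  end
  return if rows.empty?

  values = rows.map { |r| "(#{r['poll_id'].to_i}, #{r['option_id'].to_i})" }.join(', ')
  ActiveRecord::Base.connection.execute(
    "INSERT INTO votes (poll_id, option_id) VALUES #{values}"
  )
end
```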
While you can certainly do what you need to do in AWS, that high level of I/O will probably cost you. RDS can support up to 30,000 IOPS; you can also use multiple EBS volumes in different configurations to support high IO if you want to run the database yourself.
Depending on your planned usage patterns, I would probably look at pushing into an in-memory data store, something like memcached or redis, and then processing the requests from there. You could also look at DynamoDB, which might work depending on how your data is structured.
Are you going to have that level of sustained throughput consistently, or will it be in bursts? Do you absolutely have to preserve every single vote, or do you just need summary data? How much will you need to scale - i.e. will you ever get to 20,000 votes per second? 200,000?
These types of questions will help determine the proper architecture.
I have a Rails app hosted on Heroku. I have to do long backend calculations and queries against a MySQL database.
My understanding is that using the DelayedJob or Whenever gems to invoke backend processes will still have an impact on the Rails (front-end) server's performance. Therefore, I would like to set up two different Rails servers.
The first server is for front-end (responding to users' requests) as in a regular Rails app.
The second server (also a Rails server) is for back-end queries and calculations only. It will only read from MySQL, do the calculations, then write the results to a separate Redis server.
My sense is that not a lot of Rails developers do this; they prefer running background jobs on one Rails server and adding more workers as needed. Is my server structure a good design, or is it overkill? Are there any pitfalls I should be aware of?
Thank you.
I don't see any reason why a background job like DelayedJob would cause any more overhead on your main application than another server would. DelayedJob runs in its own process, so the dynos for your main app aren't affected. The only impact could be on the database queries, but that will be the same whether they come from a background job or from another app altogether accessing the same database.
I would recommend using DelayedJob and workers on your primary app. It keeps things simple and shouldn't be any worse performance-wise.
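For reference, a minimal sketch of a custom DelayedJob; the HeavyCalculationJob and Report names are illustrative, not from the original app.

```ruby
# app/jobs/heavy_calculation_job.rb
# Classic Struct-based delayed_job custom job.
class HeavyCalculationJob < Struct.new(:report_id)
  def perform
    report = Report.find(report_id)
    report.run_long_calculation!   # the expensive MySQL work lives here
  end
end

# Somewhere in the web app (e.g. a controller action):
def start_calculation(report)
  # Enqueue from the web process; the work runs in the worker process,
  # so the web dynos stay responsive.
  Delayed::Job.enqueue(HeavyCalculationJob.new(report.id))
end
```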
One other thing to consider, if you are really worried about performance, is to have a database "follower": this is effectively a second database that keeps itself up to date with your primary database but can only be used for reads (not writes). There may be better documentation about it, but you can get the idea here: https://devcenter.heroku.com/articles/fast-database-changeovers#create_a_follower. You could then have these lengthy background jobs read data from the follower, leaving your main database completely unaffected.
I would like to hear from the community about a nice pattern for the following problem.
I had a "do-everything" server that acted as web server, MySQL server, and crawler server. For the last two or three weeks, using monitoring tools, I saw that whenever my crawlers were running, my load average went over 5 (on a 4-core server, anything up to 4.00 would be acceptable). So I got another server and I want to move my crawlers there. My question is: as soon as I have the crawled data on my crawler server, I have to insert it into my database. I would rather not open a remote connection and insert it into the database directly, since I prefer to use the Rails framework (I'm using Rails, by the way) to make it easier to create all the relationships, etc.
Problem to be solved: the crawler server has the crawled data (a bunch of CSV files), and I want to move it to the remote server and insert it into my DB using Rails.
Restriction: I don't want to run MySQL master/slave replication, since that would require a deeper analysis of where most write operations happen.
Ideas:
Move the CSVs from the crawler server to the remote server (using ssh/rsync) and import them during the day.
Write an API on the crawler server that my remote server can pull from (many times a day) to import the data.
Any other ideas or good patterns around this theme?
With a slight variation on the second pattern you have noted, you could have an API on your web-app/db server, which the crawler uses to report in its data. It could do this in batches, in real time, or only within a specific window of time (day/night time, etc.).
This pattern lets the crawler decide when to report in the data, rather than having the web app do the 'polling' for data.
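For illustration, a minimal sketch of such a batch reporting endpoint on the Rails side; the CrawledItem model and the payload shape are assumptions, not from the original app.

```ruby
# app/controllers/api/crawled_items_controller.rb
class Api::CrawledItemsController < ApplicationController
  # The crawler POSTs JSON like:
  #   { "items": [{ "url": "...", "title": "..." }, ...] }
  def create
    items = params.require(:items)

    CrawledItem.transaction do
      items.each do |item|
        CrawledItem.create!(url: item[:url], title: item[:title])
      end
    end

    head :created
  end
end
```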