Solana Anchor with Ruby on Rails

I have the following problem: I used Anchor to build and deploy a program on the Solana blockchain, and now I've run into the problem of how to interact with this program from Ruby on Rails. As I understand it, we first have to binary-encode a transaction to send it, and use the same decoding if we want to get data back. Does anyone have examples of how the encoding should be done correctly? As I understand it, we encode the program ID, the latest blockhash, the function name, and the data to create a transaction. Maybe someone has done this using Ruby and has an example? Sincerely grateful!

It's going to be really annoying to do in Ruby. You have to use the Solana JSON-RPC API to build and send the transaction, including encoding the data for accounts using Borsh (or whatever serialization your program uses, but most use Borsh), etc.
I'd use JS/TS for all of the interaction with your program unless you have a very good reason to do otherwise and a lot of time and energy on your hands. TS already has this stuff in web3.js / Anchor.
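For illustration, here is a minimal sketch of how an Anchor instruction's data might be assembled in Ruby. The method name increment and its u64 argument are hypothetical; the 8-byte discriminator (the first 8 bytes of SHA-256 of "global:<method_name>") is Anchor's convention for identifying instructions:

require "digest"

# Anchor prefixes instruction data with an 8-byte discriminator:
# the first 8 bytes of SHA-256("global:<method_name>").
def anchor_discriminator(method_name)
  Digest::SHA256.digest("global:#{method_name}")[0, 8]
end

# Borsh encodes a u64 as 8 bytes, little-endian.
def borsh_u64(value)
  [value].pack("Q<")
end

# Hypothetical instruction: increment(amount: u64)
instruction_data = anchor_discriminator("increment") + borsh_u64(42)

This byte string becomes the instruction's data field; you would still need to assemble the full transaction (program ID, account metas, recent blockhash), sign it, and submit it via the sendTransaction RPC method.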

Related

What's the fastest way to read a large file in Ruby?

I've seen answers to this question, but I couldn't figure out which of them would perform the fastest. These are the answers I've seen; which is best?
Read one line at a time using each or each_line
Read one line at a time using gets
Save it all into an array of lines using readlines and then use each
Use grep (not sure what exactly to do with grep...)
Use sed (not sure what exactly to do with sed...)
Something else?
Also, would it be better to just use another language or should Ruby be fine?
EDIT:
More details: Each line contains something like "id1 attr1_1 attr2_1 id2 attr1_2 attr2_2... idn attr1_n attr2_n" (n is very big) and I need to insert those into a database. For that example line, I would need to insert n rows into the database.
Ruby will likely be using the same or very similar low-level code (written in C) to do the actual reading from disk for the first three options, so they should perform similarly. Given that, you should choose whichever is most convenient for you; the ability to do that is what makes languages like Ruby so useful! You will be reading a lot of data from disk, so I would suggest using each_line and processing each line as you read it.
I would not recommend bringing grep, sed, or any other such external utilities into the picture unless you have a very good reason, as they will make your code less portable and expose you to failures that may be difficult to diagnose.
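For example, a minimal sketch of the each_line approach applied to the format described in the question (insert_row is a placeholder for your actual database call):

File.open("data.txt") do |file|
  file.each_line do |line|
    # Each line is "id1 attr1_1 attr2_1 id2 attr1_2 attr2_2 ...",
    # so take the tokens three at a time.
    line.split.each_slice(3) do |id, attr1, attr2|
      # insert_row(id, attr1, attr2)  # placeholder for the DB insert
    end
  end
end

For n rows per line, batching the inserts (e.g. one multi-row INSERT per line) will likely matter far more for overall speed than how the file is read.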
If you're using Ruby then there's no need to worry about performance. The language is such that it suits an iterative approach to reading a file, line by line, and works very nicely. So long as you're using the language the way it's designed you can let the interpreter people worry about performance. Job done.
If one particular readLargeFileFast method is needed, it should be because it's really hindering the program somehow. In that case, you could write a C program to do it and popen it as a separate process from within your Ruby code. You could call it read_large.c and (perhaps) use command line arguments to tell it how to behave.
This is championing the idea that a scripting language is used for fast development rather than fast run time. A developer can be very productive by swiftly 'prototyping' a program in something like Ruby and only later rewriting the components that warrant some low-level code. Often, however, once it's working in script, it's not necessary to do anything else at all.
The Ruby docs describe launching a separate process and treating it as a file. It's easy-peasy! A good start is The Art of Unix Programming's introductory paragraph on program modularity. That book also gives a great example of using Unix's standard stream editor, sed, which you could probably use from Ruby right now.
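For instance, a minimal sketch of that pattern, piping a file through sed and reading the result like a file (the substitution expression is just a placeholder):

# Launch sed as a child process and read its output line by line.
IO.popen(["sed", "s/foo/bar/g", "data.txt"]) do |pipe|
  pipe.each_line do |line|
    # process each transformed line here
  end
end

The same IO.popen call would work for the hypothetical read_large C helper mentioned above.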
If you need to parse or edit a lot of text, note that many interpreters and editors have been written around sed's functionality; it may save you a lot of effort writing something super efficient if you don't know C. A good introduction is the Introduction to sed by Bruce Barnett.

Is it sensible to run Backbone on both server (Node.js and Rails) and client for complex validations?

This is verbose, I apologise if it’s not in accordance with local custom.
I’m writing a web replacement for a Windows application used to move firefighters around between fire stations to fill skill requirements, enter sick leave, withdraw firetrucks from service, and so on. Rails was the desired back-end, but I quickly realised I needed a client-side framework and chose Backbone.js.
Only one user will be on it at a time, so I don’t have to consider keeping clients in sync.
I’ve implemented most of the application and it’s working well. I’ve been avoiding facing a significant shortcoming, though: server-side validations. I have various client-side procedures ensuring that invalid updates can’t be made through the interface; for instance, the user can’t move someone who isn’t working today to another station. But nothing is stopping a malicious user from creating a record outside of the UI and saving it to the server, hence the need for server-side validation.
On load, the client receives all of today's relevant records and processes them. When a new record is created, it's sent to the server, and processed on the client if it saved successfully.
The process of determining who is working today is complex: someone could be scheduled to work, but have gone on holidays, but then been called in, but then been sent home sick. Untangling all this on the server (on each load?!) in Ruby/Rails seems an unfortunate duplication of business logic. It would also carry significant overhead: in one specific case, calculating who is to be temporarily promoted to a higher rank based on station shortages and union rules could mean reloading and processing almost all of today's data, over and over as each promotion is performed.
So, I thought, I have all this Backbone infrastructure that’s building an object model and constraining what models can be created, why not also use it on the server side?
Here is my uncertainty:
Should I abandon Rails and just use Node.js or some other way of running Backbone on the server?
Or can I run Node.js alongside Rails? When a user opens the application, I could feed the same data to the browser and Node, and Rails would check with the server-side Backbone to make sure the proposed new object was valid before saving it and returning it to the browser.
One factor is how deeply Rails is entrenched in this application. There isn't that much server-side Ruby for creating and deleting records, but I made a sort of adaptation layer for loading the data to compensate for the legacy database model. Rails is mostly just serving JSON, plus CSS, JavaScript, and template assets. I do have a lot of Cucumber features, but maybe only the data-creation ones would need to be updated?
Whew! So, I'm looking for reassurance: is it reasonable, as suggested in this answer, to be running both Rails and Node on the server, with some kind of inter-process communication? Or has Rails's usefulness shrunk so much (it is pretty much a single-page application, as mentioned in that answer) that I should just get rid of it entirely and suffer some rewriting to move to a Node environment?
Thanks for reading.
It doesn't sound like you're worried about concurrency so much as about being able to push data, which both platforms are completely capable of. If you have a big investment in Ruby code now, and no one is complaining about its use, then what is the concern? If you want Node for push and for the singularity of using JavaScript throughout the stack, then it might be worth moving code over to it. From your comments, I really feel this is more about what is interesting to you, but you'll have to support the language(s) you choose. If you're the only one on the team, it's pretty easy to slip into a refactor to Node just because it is interesting. It goes back to who's complaining more, you or the customer. To summarize: Node lets you move toward a single language in your code base, but you have to worry about the pitfalls JavaScript on the server currently has. Rails is nice because you retain the ability to quickly generate features and prototype them.

Advice on communication between rails front-end and scala backend

We are developing the following setup:
A rails webapp allows users to make requests for tasks that are passed on to a Scala backend to complete, which can take up to 10 seconds or more. While this is happening, the page the user used to make the request periodically polls rails using AJAX to see if the task is complete, and if so returns the result.
From the user's point of view the request is synchronous, except their browser doesn't freeze and they get a nice spinny thing.
The input data needed by the backend is large and has a complex structure, as does the output. My initial plan was to simply have the two apps share the same DB (which will be MongoDB), so the rails app could simply write an id to a 'jobs' table which would be picked up by the scala backend running as a daemon, but the more I think about it the more I worry that there may be lots of potential gotchas in this approach.
The two things that worry me most are the duplication of model code in two different languages, which would need to be kept in sync, and the added complexity of dealing with this at deployment. What other possible problems should I take into account when assessing this approach?
Some other possibilities I'm looking at are 1) making the Scala backend a RESTful service or 2) implementing a message queue. However, I'm not fully convinced about either option as they will both require more development work and it seems to me that in both cases the model code is effectively duplicated anyway, either as part of the RESTful API or as a message for the message queue - am I wrong about this? If one of these options is better, what is a good way to approach it?
I have used Resque several times for similar problems and I've always been very happy with it. It gives you everything you need to implement a job queue and is backed by Redis. I would strongly suggest you take a look at it.
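For reference, a minimal Resque job sketch (the queue name, class name, and hand-off mechanism are all hypothetical):

# A Resque job is a plain class with a @queue and a self.perform.
class ScalaTaskJob
  @queue = :scala_tasks

  def self.perform(task_id)
    # Hand the task off to the Scala backend here, e.g. over HTTP
    # or via the shared MongoDB jobs collection described above.
  end
end

# Enqueue from a Rails controller; a worker process picks it up:
# Resque.enqueue(ScalaTaskJob, task.id)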
Don't get me wrong, but for this project, what does Rails provide that Scala does not?
Put another way, can you do without Scala entirely and do it all in Rails? Obviously yes, but you're developing the back end in Scala for a reason, right? Now, what does Scala provide that Rails does not?
I'm not saying you should do one or the other, but maintaining duplicate code is asking for trouble/complication.
If you have a mountain of Ruby code already invested in the project, then you're committed to Rails, barring a complete Twitter-esque overhaul (doesn't sound like that's an option nor desired).
Regardless, I'm not sure how you are going to get around the model duplication. Hitting a Mongo backend does buy you BSON result sets, which you could work with in Spray, the excellent Akka-based toolkit with out-of-the-box REST and JSON utilities.
Tough problem, sounds like you're caught between 2 paradigms!

Find unused code in a Rails app

How do I find what code is and isn't being run in production?
The app is well-tested, but there are a lot of tests that test unused code, so that code still gets coverage when running tests... I'd like to refactor and clean up this mess; it keeps wasting my time.
I have a lot of background jobs, which is why I'd like the production environment to guide me. Running on Heroku, I can spin up dynos to compensate for any performance impact from the profiler.
The related question How can I find unused methods in a Ruby app? was not helpful.
Bonus: metrics to show how often a line of code is run. Don't know why I want it, but I do! :)
Under normal circumstances the approach would be to use your test data for code coverage, but as you say you have parts of your code that are tested but are not used on the production app, you could do something slightly different.
Just for clarity first: Don't trust automatic tools. They will only show you results for things you actively test, nothing more.
With the disclaimer behind us, I propose you use a code coverage tool (like rcov, or simplecov for Ruby 1.9) on your production app and measure the code paths that are actually used by your users. While these tools were originally designed for measuring test coverage, you can also use them for production coverage.
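As a rough sketch, simplecov could be started from an initializer, gated behind an environment variable because of the overhead (the variable name and output directory are just placeholders):

# config/initializers/production_coverage.rb
if ENV["MEASURE_PRODUCTION_COVERAGE"]
  require "simplecov"
  SimpleCov.command_name "production"
  SimpleCov.start "rails" do
    coverage_dir "tmp/production_coverage"  # ship this off the dyno somehow
  end
end

Note that simplecov normally writes its report at process exit, so on a long-running server you'd need periodic dumps or graceful restarts; this is part of why the coverband gem mentioned below exists.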
Under the assumption that during the test time-frame all relevant code paths are visited, you can remove the rest. Unfortunately, this assumption will most probably not fully hold. So you will still have to apply your knowledge of the app and its inner workings when removing parts. This is even more important when removing declarative parts (like model references) as those are often not directly run but only used for configuring other parts of the system.
Another approach, which could be combined with the above, is to refactor your app into distinct features that you can turn on and off. Then you can turn off features that are suspected to be unused and check if nobody complains :)
And as a final note: you won't find a magic tool to do your full analysis. That's because no tool can know whether a certain piece of code is used by actual users or not. The only thing that tools can do is create (more or less) static reachability graphs, telling you if your code is somehow called from a certain point. With a dynamic language like Ruby even this is rather hard to achieve, as static analysis doesn't bring much insight in the face of meta-programming or dynamic calls that are heavily used in a rails context. So some tools actually run your code or try to get insight from test coverage. But there is definitely no magic spell.
So, given the high internal (mostly hidden) complexity of a Rails application, you will not get around doing most of the analysis by hand. The best advice would probably be to modularize your app and turn off certain modules to test if they are not used. This can be supported by proper integration tests.
Check out the coverband gem; it does exactly what you are searching for.
Maybe you can try rails_best_practices to check for unused methods and classes.
Here it is on GitHub: https://github.com/railsbp/rails_best_practices .
Put gem "rails_best_practices" in your Gemfile, then run rails_best_practices . from your app's root (or rails_best_practices -g to generate a configuration file).
I had the same problem and after exploring some alternatives I realized that I have all the info available out of the box - log files. Our log format is as follows
Dec 18 03:10:41 ip-xx-xx-xx-xx appname-p[7776]: Processing by MyController#show as HTML
So I created a simple script to parse this info
zfgrep Processing production.log*.gz |awk '{print $8}' > ~/tmp/action
sort ~/tmp/action | uniq -c |sort -g -r > ~/tmp/histogram
This produced results showing how often a given controller#action was accessed:
4394886 MyController#index
3237203 MyController#show
1644765 MyController#edit
The next step is to compare it to the list of all controller#action pairs in the app (using the rake routes output, or by running the same script against the test suite's logs).
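A rough sketch of that comparison from a Rails console (field positions match the awk script above; paths and names are illustrative):

require "set"

# Actions seen in production, from the histogram built above.
accessed = File.readlines(File.expand_path("~/tmp/histogram"))
               .map { |line| line.split.last }
               .to_set

# All routed actions, in the same "FooController#action" form.
routed = Rails.application.routes.routes.filter_map do |route|
  c, a = route.defaults[:controller], route.defaults[:action]
  "#{c.camelize}Controller##{a}" if c && a
end.to_set

puts(routed - accessed)  # candidates for removal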
You already got the idea of marking suspicious methods as private (which will maybe break your application).
A small variation I used in the past: add a small piece of code to all suspicious methods to log their use. In my case it was a user popup: "You called an obsolete function - if you really need it, please contact IT".
After one year we had a good overview of what was really used (it was a business application, and there were functions needed only once a year).
In your case you should only log the usage. Everything that is not logged after a reasonable period is unused.
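One way to add such logging without editing every method body is to prepend a module (the class and method names here are hypothetical):

# Log the call, then fall through to the original implementation.
module UsageLogger
  def suspicious_method(*args, &block)
    Rails.logger.info("OBSOLETE? #{self.class}#suspicious_method was called")
    super
  end
end

SomeModel.prepend(UsageLogger)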
I'm not very familiar with Ruby and RoR, but here's a crazy guess I'd suggest:
add an after_filter which logs the name of the action that was just called (grab it from the call stack) to a file (see the sketch below)
deploy this to production
wait for a while
remove all methods that are not in the log.
p.s. probably a solution with Alt+F7 in NetBeans or RubyMine is much better :)
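A sketch of the after_filter idea (after_filter is the pre-Rails-4 name; newer versions call it after_action; the log path is a placeholder):

class ApplicationController < ActionController::Base
  after_filter :log_action_usage

  private

  # Append "controller#action" for every request that completes.
  def log_action_usage
    File.open(Rails.root.join("log", "action_usage.log"), "a") do |f|
      f.puts "#{controller_name}##{action_name}"
    end
  end
end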
Metaprogramming
Object#method_missing
Override Object#method_missing. Inside, log the calling class and method, asynchronously, to a data store. Then manually call the original method with the proper arguments, based on the arguments passed to method_missing.
Object tree
Then compare the data in the data store to the contents of the application's object tree.
Disclaimer: this will surely require significant performance and resource consideration. Also, it will take a little tinkering to get it to work, but theoretically it should work. I'll leave it as an exercise to the original poster to implement. ;)
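A hedged sketch of the interception part: the original method has to be removed first so calls actually reach method_missing (the class and method names are hypothetical):

class SomeModel
  # Keep a copy of the original, then undefine it so calls fall
  # through to method_missing.
  alias_method :__original_suspicious, :suspicious_method
  undef_method :suspicious_method

  def method_missing(name, *args, &block)
    if name == :suspicious_method
      Rails.logger.info("USED: #{self.class}#suspicious_method")
      return __original_suspicious(*args, &block)
    end
    super
  end

  def respond_to_missing?(name, include_private = false)
    name == :suspicious_method || super
  end
end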
Have you tried creating a test suite using something like Sahi? You could then record all your user journeys with it and tie those tests to rcov or something similar.
You do have to ensure you cover all user journeys, but after that you can look at what rcov spits out and at least start to prune out stuff that is obviously never covered.
This isn't a very proactive approach, but I've often used results gathered from New Relic to see whether something I suspected of being unused had been called in production anytime in the past month or so. The apps I've used this on have been pretty small, though, and it's decently expensive for larger applications.
I've never used it myself, but this post about the laser gem seems to talk about solving your exact problem.
Mark suspicious methods as private. If that does not break the code, check whether the methods are used inside the class. Then you can delete things.
It is not the perfect solution, but, for example, in NetBeans you can find usages of a method by right-clicking on it (or pressing Alt+F7).
So if a method is unused, you will see it.

Is there any way to intercept print jobs on a local Windows XP machine?

Preferably using a scripting language like Perl or Python, but if I have to go the compiled route then so be it.
Essentially what I want to do is make an addition to my company's mail merge system. Right now, the software we use has a rather limited selection of mail merge fields that it exports, but if we could somehow incorporate results of database queries into the letters, we could accomplish much more (and unfortunately Word doesn't provide enough flexibility with database queries to accomplish this). The system we use automatically sends its letters to the default printer (which is a peer-to-peer printer, no print server). I would like to create a program that could act as a middleman for this. Ideally, it would detect when a print job is fired, capture the sent document, open it, insert extra data from its own queries, then send the new version to the printer.
I have two questions:
Is this even possible? If so, where do I start?
Is this feasible for one person to complete in a reasonable time frame?
Keep in mind I'm not a programmer by trade; I'm the sysadmin type of person =P
Honestly, this is an incredibly hard road to go down. Maybe try creating a virtual printer that deals with the data and forwards it on to the real printer. I'll see if I can find anything for you.
If you are using Word, I think you would find it vastly easier to implement your enhanced mail merge system in Visual Basic. I suspect it would be vastly, vastly harder to intercept the jobs at the print-job level. If you prefer Perl or Python to VB, you could even write .py/.pl scripts to run the queries and generate .vbs scripts. You could also use OpenOffice, which can be scripted with Python.
