Implementing an online compiler [closed] - ruby-on-rails

I am doing a small project: hosting a site similar to ideone.com, i.e. an online compiler that compiles and runs the code given as input. I am using Ruby on Rails for the backend.
What I have done is this: the code entered in the textbox is stored in a string, and I use system calls in Ruby to create a file and write the string to it. Similarly, I store the input for the code in another file. I then use system calls again to compile and run the file, capture the output in a string, and send it to the front end.
I have two problems with the above implementation:
1) It only works for a single user at a time. Any idea how to implement it for multiple users, and if so, what would the limit on the number of users be?
2) Anyone can submit malicious code and harm the system. I have to sandbox the environment so that the code runs in isolation. How can I do that?
Programs running infinite loops are not a problem, as I have put a limit on the execution time. I am using backticks to execute the shell commands. I am implementing this for C; if I manage to solve all these problems, I will extend it to other languages.
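For reference, a minimal sketch of the compile-and-run step described above, using Open3 rather than bare backticks so stdout, stderr, and the exit status are captured separately (the file handling and the 5-second limit are illustrative assumptions):

    require "open3"
    require "tempfile"

    # Write the submitted code to a temp file, compile it, then run the
    # binary under a wall-clock limit via the coreutils `timeout` command.
    def compile_and_run(source_code, stdin_data, time_limit: 5)
      src = Tempfile.new(["submission", ".c"])
      src.write(source_code)
      src.close

      binary = "#{src.path}.out"
      _out, err, status = Open3.capture3("gcc", src.path, "-o", binary)
      return { error: err } unless status.success?

      out, err, status = Open3.capture3(
        "timeout", time_limit.to_s, binary,
        stdin_data: stdin_data
      )
      { stdout: out, stderr: err, exit_status: status.exitstatus }
    ensure
      src&.unlink
      File.delete(binary) if binary && File.exist?(binary)
    end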

For the sake of not letting people wipe out your hard drive, install spambots, etc., you will need to run all submitted code inside a virtual machine to protect the host. This also solves the multi-user problem, since you can spin up a virtual machine for each user and spin it down after running the code. However, this might use a lot of resources on your server.
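A lighter-weight variant of the same idea is to isolate each run in a throwaway container rather than a full VM. A sketch, where the image name and the resource limits are assumptions:

    require "open3"

    # Run the compiled binary inside a disposable container: no network,
    # capped memory and CPU, submission directory mounted read-only.
    # Returns [stdout, stderr, status] just like Open3.capture3.
    def run_sandboxed(workdir, time_limit: 5)
      Open3.capture3(
        "docker", "run", "--rm",
        "--network", "none",          # no outbound access for the code
        "--memory", "128m",           # cap memory
        "--cpus", "0.5",              # cap CPU
        "-v", "#{workdir}:/box:ro",   # read-only mount of the submission
        "gcc:latest",                 # illustrative image choice
        "timeout", time_limit.to_s, "/box/a.out"
      )
    end

Starting a container per request is much cheaper than booting a VM, which also helps with the multiple-users question: each submission gets its own isolated, disposable environment.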
I'd be interested to find out what ideone.com does. I suspect that everything runs in the client's browser, which is obviously much safer since you can just use your server to save their code, but not actually run it. If it runs in their browser it is sandboxed anyway. Are you sure that you don't want to do this instead? I've never heard of anyone letting people upload code and then run it on the system server. Seems kind of insanely risky.

Related

Where is the best place in the Rails app stack to put an early "ok ok" answer? [closed]

edit: -------------
Removed the misleading line "This is a question about hacker defense."
It is not a question about defending against potential intruders, or about security in general.
I am asking only about waving through already-known "friends" in passing, not about filling logs or raising alarms.
To clarify: I know that request blocking or redirecting is better handled with Apache or the like.
The scenario is that I have a small server of my own, used almost exclusively privately.
This server I watch as well as I can, and I get alerts if something happens that shouldn't.
I have the module almost ready; the question is what to put here:
config.middleware.insert_before ?????, Friendly::SayYesOK
Additionally, I have a small but useful trick that helps me see what script kiddies are trying to probe for.
The idea:
I want to answer already-known bad HTTP(S) requests as early as possible with a "Yes, ok" - anyone who has ever sat in a bar while an unwanted drunk tried to start a conversation knows the technique.
I want to answer all "php", "joomla", "WordPress", … attacks with an "ok", without filling my logs and without having them turn up in any "page not found" analysis.
But where should such a module go in the Rails app stack? I think it should run before routes.rb is involved and after the request headers are available.
I know how to do it; I just don't know at what level.
Ideas?
Don't put it in the Rails stack. Put a dedicated, lightweight HTTP server in front of your Rails app. nginx is a good choice. Configure it to reject certain patterns of URLs and forward all other requests to your Rails app.
This server can also act as a load balancer, cache, request logger, and rate limiter, run A/B tests, and serve shared static files more efficiently than a Rails app can. It is a very important part of a stable web stack.
For example, when you have a new release you can spin up your new release on new instances, test that they work (they effectively become your staging servers), change your world-facing server to redirect requests to the new instances, and then shut down the old instances. This allows upgrades with little or no downtime.
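That said, if you do decide to answer from inside Rails, a minimal sketch of such a Rack middleware might look like this (the class name comes from the question above; the URL patterns are illustrative assumptions):

    # app/middleware/friendly/say_yes_ok.rb
    module Friendly
      class SayYesOK
        # Illustrative patterns for well-known scanner probes
        SCANNER_PATHS = %r{/wp-login|/wp-admin|/joomla|\.php\z}i

        def initialize(app)
          @app = app
        end

        def call(env)
          if env["PATH_INFO"].match?(SCANNER_PATHS)
            # Answer immediately; the request never reaches the router
            [200, { "Content-Type" => "text/plain" }, ["ok"]]
          else
            @app.call(env)
          end
        end
      end
    end

Inserting it at the very front of the stack (config.middleware.insert_before 0, Friendly::SayYesOK) runs it after the request headers are parsed but before routing and before Rails' own request logging, which is the level the question asks about.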

What are the real-world use cases of Docker containers? [closed]

I understand Docker container features like app isolation, server consolidation, etc., but I am wondering what the real-world use cases for containers are. How are you using containers in a production environment? Are they being used to host web services and web sites? Can you please give me some examples?
Thanks
First, decoupling and distributing applications into microservices.
I'll give you an example I work with. Take the old method of deploying an application, say a Rails monolith: the DB, the web app, a background task scheduler like Sidekiq (in the Python case, RabbitMQ), and Redis holding temporary key-value data, all running on a single server. This application has a single point of failure at any given moment: if anything fails, your whole application is offline while you rush off to trace what broke. If instead you split the application into small pieces known as microservices, each part is packed into a container with the libraries it needs, then deployed independently and dynamically to form your overall application. Decoupling the application into these manageable components reduces the probability of it ever failing entirely. Furthermore, decoupling allows container-based applications to be deployed easily and consistently, regardless of whether the target environment is a private data center, the public cloud, or even a developer's personal laptop.
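As a concrete illustration of that split, a docker-compose sketch of the same monolith broken into one container per concern (image tags and service names are assumptions):

    version: "3"
    services:
      db:
        image: postgres:15        # the database in its own container
      redis:
        image: redis:7            # temporary key-value data
      web:
        build: .                  # the Rails app image
        command: bin/rails server -b 0.0.0.0
        ports: ["3000:3000"]
        depends_on: [db, redis]
      sidekiq:
        build: .                  # same image, separate process
        command: bundle exec sidekiq
        depends_on: [db, redis]

If the sidekiq container dies, the web container keeps serving requests; each piece can be restarted, scaled, or replaced independently.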
Dev and Ops separation of concerns
Take the part where most people build a release during deployment, which can fail mid-deploy. With a container you build your image before release time, and you can run an environment similar to production with no guesswork. This makes it easy to roll out more frequently with less concern about failure, and it increases the consistency with which you release and build your application.
Containerization provides a clean separation of concerns, as developers focus on their application logic and dependencies, while IT operations teams can focus on deployment and management without bothering with application details such as specific software versions and configurations specific to the app.
Have a read through these two links and they should clear up some things on your end: Google's "Containers 101" and the Kubernetes documentation.

Ruby Concurrency in cron job needed [closed]

I am developing a system in which the API should handle simultaneous, continuous requests, using Rails 4.0.
In this system, each user has 3 scripts to be run in the background. The scripts grab the user's information from the DB, call an API repeatedly, and process transactions. Currently I am using cron jobs (the whenever gem) to run the scripts in the background for each individual user.
So my problem is: when the system has 1,000 users, I need to run 3,000 cron jobs.
I think this system will have problems. Can anyone help me solve this?
At this point you have a system that performs some tasks periodically, and the amount of work your system has to handle (let's say, per hour) is less than the amount of work it could handle.
However, the amount of work increases with the number of users in your system so, as you have already guessed, there will be a moment when the situation will be critical. Your system will not be able to handle all the tasks it has to do.
One way to solve this problem is to add more machines to your system. That is, if you are currently using a single machine to run all your tasks, consider adding another one and splitting the job. You can split the job between the machines in a number of ways, but I would use a producer-consumer approach.
You will need a queue manager, where your producer periodically sends a batch of tasks to be done (you can still use the whenever gem for that) and a number of consumers (1 makes no sense; 2 would be OK for now, but you can increase this number) get the tasks done one by one until there are none left.
The manager I like the most is Sidekiq but you can find some others that might match your needs better.
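A sketch of that shape with the tools already mentioned, where the class names are illustrative: the whenever gem drives a single cron entry that acts as the producer, and Sidekiq workers are the consumers, so 1,000 users means 1,000 queued jobs rather than 3,000 cron jobs.

    # config/schedule.rb (whenever gem) - the producer: one cron entry total
    every 1.hour do
      runner "UserScriptEnqueuer.call"
    end

    # The producer enqueues one job per user; Sidekiq consumers drain
    # the queue concurrently until none are left.
    class UserScriptEnqueuer
      def self.call
        User.find_each do |user|
          UserScriptWorker.perform_async(user.id)
        end
      end
    end

    class UserScriptWorker
      include Sidekiq::Worker

      def perform(user_id)
        user = User.find(user_id)
        # fetch the user's info, call the external API, and process the
        # transaction (the work the question's per-user scripts do)
      end
    end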

Importing data that may take 10-15 minutes to process, what are my options in Rails? [closed]

I have a Rails application that displays thousands of products.
The products are loaded from product feeds, so the source may be a large XML file or web service API calls.
I want to be able to re-use the models from my existing Rails application in my import process.
What are my options in importing data into my Rails application?
I could use Sidekiq to fire off rake tasks, but I'm not sure if Sidekiq is suitable for tasks that take 10+ minutes to run. Most use cases I have seen are for sending emails and other similarly light tasks.
I could maybe create a stand-alone Ruby script, but I'm not sure how I could re-use my Rails models if I go this route.
Update
My total product count is around 30-50K items.
Sidekiq would be a great option for this, as others have mentioned. 10+ minutes isn't unreasonable, as long as you understand that if you restart your Sidekiq process mid-run, that job will be stopped as well.
The concern I have is if you are importing 50K items and you have a failure near the beginning you'll never get to the last ones. I would suggest looking at your import routine and seeing if you can break it up into smaller components. Something like this:
Start sidekiq import job.
First thing job does is reschedule itself N hours later.
Fetch data from API/XML.
For each record in that result schedule a "import this specific data" job with the data as an argument.
Done.
The key is the second to last step. By doing it this way your primary job has a much better chance of succeeding as all it is doing is reading API/XML and scheduling 50K more jobs. Each of those can run individually and if a single one fails it won't affect the others.
The other thing to remember is that, unless you configure it not to, Sidekiq will rerun failed jobs. So make sure that the "import this specific data" job can be run multiple times and still do the right thing.
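A sketch of that job shape (the class names and the fetch_products helper are illustrative assumptions):

    class ProductImportJob
      include Sidekiq::Worker

      def perform
        # Step 2: reschedule the next full import before doing any work
        self.class.perform_in(24.hours)

        # Steps 3-4: fetch the feed, then fan out one small job per
        # record, so a single bad record only fails and retries alone
        fetch_products.each do |attrs|
          ProductUpsertJob.perform_async(attrs)
        end
      end
    end

    class ProductUpsertJob
      include Sidekiq::Worker

      def perform(attrs)
        # Idempotent upsert: safe to run multiple times, so Sidekiq's
        # automatic retries still "do the right thing"
        product = Product.find_or_initialize_by(sku: attrs["sku"])
        product.update!(attrs)
      end
    end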
I have a very similar setup that has worked well for me for two years.

Auto-update a Windows Service [closed]

I am about to develop a program which will be installed and run as a Windows Service on the back-end servers (with internet access) of several clients. I do not have physical access to the servers. What I am working on is a scheme for reliably rolling out updates to the program.
I have spent quite a lot of time looking for best practices for Windows Service auto-updates, but have found very little useful information. Most techniques, such as ClickOnce, seem to cater primarily to Windows Forms-based applications.
My current plan is:
When updates are available, have the running service download, install, and start an update service.
The update service will download the new version of the main service and install it side-by-side of the current, still active version.
The update service next stops the old version and then starts the new version of the service.
The new version disables the update service during startup.
Some important concerns for me are:
Being able to handle updates both to the main service and the update service
Being able to handle interruptions such as power failure or a connection failure at any point during the update process
What I am wondering is whether this is a common way to solve this problem, and whether I am missing anything crucial. Is it too complicated, or perhaps far too simplistic? How would you have done it (or, even better, how have you done this successfully)?
Thanks!
I was faced with the same problem last year and I wrestled with a lot of the same questions. A few bits of advice:
Since either service may be updated, both services will function as updaters of the other: Service A will update Service B and vice versa. For this reason, I suggest simply running both services at all times. Unless you are worried about really loading down your server with does-update-exist calls, the enable/disable service management isn't worth the overhead.
Like-named services can't be installed on a single machine. In other words, you can't install new and old versions of the service side by side if they have the same name. Unless you want to complicate your update process, I suggest you uninstall the old version and then install the new version. For example, Service A would download the Service B installer, uninstall Service B, install the new version of Service B, and then run it. Service B would do the same for Service A.
Since each service is managing the other, they should not only check for available updates but also verify each other's health. For example, Service A would check whether Service B exists and whether it is running. If the health check fails, Service A would execute a list of steps to resolve the issue and get the service running again. Executing the health check and recovery operations will cover you no matter what issue arises with the update, the initial install, or general operations.
Do ample logging on both the client and the server. You'll want to track what actions were taken and when. For example, Service A might log when it checks for updates and when it executes the health check and associated actions. On the server (assuming you are calling into a web service to look for updates), track the calls made by each service. If your services aren't getting updated, at least you'll have a trail of breadcrumbs (or a lack of breadcrumbs) pointing you to the problem.
There are literally bunches of potential gotchas with a solution of this sort: services not running at startup, UAC getting in the way, not being able to install and uninstall the services with the same user, ensuring the user installing the service has ample permissions, connectivity loss, getting the .NET Framework installed on the client machine, handling a reboot after install if necessary, etc.
Best of luck. It's a fun problem to solve, but it doesn't come without its frustrations, especially since, as you said, there isn't a lot of documented information available.
