The page titled Using Preload Scripts in the Electron documentation says,
IPC SECURITY
Notice how we wrap the ipcRenderer.invoke('ping') call in a helper function rather than expose the ipcRenderer module directly via context bridge. You never want to directly expose the entire ipcRenderer module via preload. This would give your renderer the ability to send arbitrary IPC messages to the main process, which becomes a powerful attack vector for malicious code.
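For reference, the pattern the quoted text describes looks roughly like this (a minimal sketch; the 'ping'/'pong' channel and the versions key come from that tutorial, the rest is illustrative):

```typescript
// preload.ts -- expose one narrow, purpose-built helper instead of the whole module
import { contextBridge, ipcRenderer } from 'electron';

contextBridge.exposeInMainWorld('versions', {
  // The renderer can only ever reach this one channel...
  ping: () => ipcRenderer.invoke('ping'),
  // ...it never gets ipcRenderer itself, so it cannot send arbitrary messages.
});

// main.ts -- the matching handler in the main process
// import { ipcMain } from 'electron';
// ipcMain.handle('ping', () => 'pong');
```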
What is the threat being described here?
When I write an Electron application, it's me who writes both the main process and the renderer -- so why must the renderer be less privileged than the main process?
What foreign (potentially malicious) code, if any, is running in the renderer and not in the main process?
The renderer process is less privileged to reduce the impact of RCE, XSS, or other attacks that might cause damage to the underlying system running the Electron application. This is most applicable to apps that you distribute to others, where you can't trust how the user will interact with your application. Outside of malicious actors interacting with your app, I can't say the risk is 0 (solely because there might be something that could happen), but it's probably very small.
Besides focusing on the security benefits of using IPC and the renderer/main process, in my opinion it's a good separation of concerns and sets up your app to be in a place to distribute it if you decide to do so in the future.
I wrote about a history of Electron as well as how IPC works and how the framework changed to now emphasize/promote IPC (in case that's something you want to read).
reZach's answer links to an article:
The ultimate Electron guide
That includes this paragraph:
You see, sticking Node in the renderer process opens up our applications to RCE (remote code execution) attacks. Through unique ways, a motivated individual can open up the developer tools on your Electron app (remember, the window is simply a Chromium browser), find the reference to fs, and boom - there goes all of your files.
The linked article, Joplin ElectronJS based Client: from XSS to RCE by Jaroslav Lobačevski, begins:
One day I was reading the master’s thesis “Analysis of Electron-Based Applications…” by Silvia Väli. She basically applied the auditing checklist by Luca Carettoni, searched GitHub projects for some interesting keywords and then manually reviewed and tested for vulnerabilities. The hypothesis of her thesis was that since Electron framework has many insecure defaults there must be significant number of vulnerable applications. It appeared to be true and she scored a bunch of nice CVEs.
Personally I find vulnerabilities in Electron based applications interesting because of the lack of process isolation, that can easily lead from a “simple” XSS to Code Execution on the running machine.
That auditing checklist is Electron Security Checklist - A guide for developers and auditors, from 2017, on blackhat.com.
Its first two checklist items include,
Disable nodeIntegration for untrusted origins
Use sandbox for untrusted origins
In summary "untrusted origins" seems to confirm reZach's comment:
The threat is mostly (and likely only) applicable when the renderer is loading scripts from third-party websites or is used by outside users; that is where RCE, XSS, or other attacks might happen.
Well, you may have an XSS vulnerability in the renderer, such as when you don't escape user input or data from an external source.
For example, if, for any reason, you have an eval call fed from the text of an HTML input that is not checked in any way, then the advice in the paragraph you shared would prevent unexpected IPC messages to the main process that could compromise your application's security.
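To make that concrete, here is a hypothetical renderer-side sketch of the scenario described above (none of this code is from the question; window.ipc and window.versions are illustrative names):

```typescript
// Renderer code (hypothetical): user-supplied text reaches eval() unchecked -- an XSS foothold.
const input = document.querySelector<HTMLInputElement>('#expr')!;
document.querySelector('#run')!.addEventListener('click', () => {
  eval(input.value); // DANGEROUS: attacker-controlled text runs as code in the renderer
});

// If the preload had exposed the whole module, e.g.
//   contextBridge.exposeInMainWorld('ipc', ipcRenderer)
// then injected code could message any channel the main process listens on:
//   window.ipc.invoke('whatever-channel', payload);
// With only a wrapped helper exposed, the worst it can reach is window.versions.ping().
```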
Related
I have a related post - Assertion failure in DBAccess.pas - but thought this was worth asking separately.
We are licensed for the full source-code release of DevArt ODAC but have been experiencing tremendous difficulties performing an upgrade. In the course of investigating this I have noticed that there is no .pas file for OraNet.dcu.
This is making it difficult to establish what the cause of our difficulties is as we cannot fully debug the code.
Also - what is this unit? From its name and the directives in the code it would be reasonable to suppose it is a .NET required unit - not something we are interested in.
The Direct DB connection mode is implemented in the OraNet.dcu module, and we don't distribute the source code of this module; this limitation is specified on our website (the reference at the bottom of the page). If you don't use Direct mode and work via the Oracle client (OCI mode), you can specify DEFINE NONET in your project settings; in this case Direct mode will be unavailable and this module won't be used.
Using the client (even Oracle Instant Client) does allow more features than our Direct mode, but Direct mode even surpasses OCI in performance in some cases. Besides, Direct mode significantly simplifies application deployment and decreases the application's size on disk, because there is no need to supply and deploy additional libraries or set additional registry parameters and environment variables. Direct mode also allows application deployment to systems for which there are no native Oracle clients, for example iOS. The choice between Direct and OCI depends on the developer and the tasks each particular application has to solve. As mentioned above, if Direct mode is not used, linking of this additional module can be disabled by setting DEFINE NONET.
As far as I know, Erlang provides advanced features for error handling and isolation of processes.
I'm building a system that allows users to submit their code to be executed in a shared server environment, and I need to make it safe.
Requirements are:
limit CPU and Memory usage individually for each user-process.
forbid user-process to communicate with other processes (except some processes specially designed for such purpose).
forbid access to all system resources (shell, file system, ...).
terminate user-process in case of errors or high resource consumption.
Is it possible to do all this with Erlang and keep it performance-efficient?
In general, Erlang doesn't provide means to sandbox code which a user can inject. You can try writing your own piece of protection code, but it is rather hard.
A better choice would probably be a language like Safe Haskell (http://www.haskell.org/ghc/docs/7.4.2/html/users_guide/safe-haskell.html), which is specifically built to do this kind of thing.
The isolation provided by Erlang is not intended to protect against malicious modules being injected. In fact, there is no such protection in the distributed case either. As soon as two machines are connected, there is no limit to what you can do to the other machine.
There has been work done on Safe Erlang in the past and you can find several papers about it.
The ErlHive project addresses the problem in an interesting way.
The situation is simple. I've created a complex Delphi application which uses several different techniques. The main application is a WIN32 module, but a few parts are developed as .NET assemblies. It also communicates with a web service or retrieves data from a specific website. It keeps most of its user data inside an MS Access database, with some additional settings inside the Registry. In memory, all data is converted into an XML document, which is occasionally saved to disk as a backup in case the system crashes. (Thus allowing the user to recover his data.) There's also some data in XML files for read-only purposes. The application also executes other applications and waits for those to finish. All in all, it's a pretty complex application.
We don't support Citrix with this application, although a few users do use this application on a Citrix server. (Basically, it allows those users to be more mobile.) But even though we keep telling them that we don't support Citrix, those customers are trying to push us to help them with some occasional problems that they tend to have.
The main problem seems to be an occasional random exception that tends to pop up on Citrix systems. It's never at the same location, and often it looks related to some memory problems. We have plenty of error reports already and there are just too many different errors, so I know solving all of those will be complex.
So I would like to go a bit more generic and just want to know about the possible issues a Delphi (2007) application can have when it's run on a Citrix system, especially when this application is not designed to be Citrix-aware in any way. We don't want to support Citrix officially, but it would be nice if we could help those customers. Not that they're going to pay us more, but still...
So does anyone know some common issues a Delphi application can have on a Citrix system?
Does anyone know about common issues with Citrix in general?
Is there some Silver Bullet or Golden Hammer solution somewhere for Citrix problems?
Btw. My knowledge about Citrix is limited to this Wikipedia entry and this website... And a bit I've Googled...
There were some issues in the past with published Delphi applications on Citrix having no icon in the taskbar. I think this was resolved by MainFormOnTaskbar (available in D2007 and higher). Apart from that there's not much difference between Terminal Server and Citrix (from the application's perspective); the most important things you need to account for are:
Users are NEVER administrators on a Terminal or Citrix Server, so they have no rights in the Local Machine part of the registry, the C drive, the Program Files folder and so on.
It must be possible for multiple users on the same system to start your application concurrently.
Certain folders such as the Windows folder are redirected to prevent possible application issues; this also means that APIs like GetWindowsFolder do not return the real Windows folder but the redirected one. Note that this behaviour can be disabled by setting a particular flag in the PE header (see delphi-and-terminal-server-aware).
Sometimes multiple servers are used in a farm, which means your application can run on any of these servers; the user is redirected to the least busy server at login (load balancing). Therefore do not use any local database to store things.
If you use an external database, middleware or application server, note that multiple users will connect with the same computer name and IP address (certain Citrix versions can use virtual IP addresses to address this).
Many of our customers use our Delphi applications on Citrix. Generally speaking, it works fine. We had printing problems with older versions of Delphi, but this was fixed in a more recent version of Delphi (certainly more recent than Delphi 2007). However, because you are now running under terminal services, there are certain things which will not work, with or without Citrix. For example, you cannot make a local connection to older versions of InterBase, which use a named pipe without the GLOBAL modifier. Using DoubleBuffered would also be a really bad idea. And so on. My suggestion is to look for advice concerning Win32 apps and Terminal Services, rather than looking for advice on Delphi and Citrix in particular. The one issue which is particular to Citrix that I'm aware of is that you can't count on having a C drive available. Hopefully you haven't hard-coded any drive letters into your code, but if you have you can get in trouble.
Generally speaking, your application needs to be compatible with MS Terminal Services in order to work with XenApp. My understanding is that .NET applications are Terminal Services-compatible, and so by extension should also work in a Citrix environment. Obviously, as you're suffering some problems, it's not quite that simple, however.
There's a testing and verification kit available from http://community.citrix.com/citrixready that you may find helpful. I would imagine the Test Kit and Virtual Lab tools will be of most use to you. The kit is free to use, but requires sign-up.
Security can be an issue. If sensitive folders are not "sandboxed" (See Remko's discussion about redirection), the user can break out of your app and run things that they shouldn't. You should probe your app to see what happens when they "shell out" of your app. Common attack points are CHM Help, any content that uses IE to display HTML, and File Open/Save dialogs.
ex: If you show .chm help, the user can right-click within a help topic, View Source. That typically opens Notepad. From there, they can navigate the directory structure. If they are not properly contained, they may be able to do some mischief.
ex: If they normally don't have a way to run Internet Explorer, and your app has a clickable URL in the about box or a "visit our web site" in the Help menu, voila! they have access to the web browser. If unrestrained, they can open a command shell by navigating to the windows directory.
What, at a minimum, should an application health-monitoring system do for you (the developer) and/or your boss (the IT Manager) and/or the operations (on-call) staff?
What else should it do above the minimum requirements?
Is monitoring the 'infrastructure' applications (ms-exchange, apache, etc.) sufficient or do individual user applications, web sites, and databases also need to be monitored?
if the latter, what do you need to know about them?
ADDENDUM: Thanks for the input; I was really looking for application-level monitoring, not infrastructure monitoring, but it is good to know about both.
Whether the application is running.
Unusual cpu/memory/network usage.
Report any unhandled exceptions.
Status of various modules (if applicable).
Status of external components (databases, webservices, fileservers, etc.)
Number of pending background tasks (if applicable).
Maybe track usage of the application and report statistics on the most/least used functionalities so you know where optimizations are most beneficial.
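The items above could be surfaced through a simple application-level health report that the monitoring system polls; a minimal sketch (Node + TypeScript, all module names and statuses here are placeholders, not taken from the answer):

```typescript
// health.ts -- expose a /health endpoint summarizing the application's own view of itself
import { createServer } from 'node:http';

interface HealthReport {
  running: boolean;
  memoryRssBytes: number;
  unhandledExceptions: number;
  modules: Record<string, 'ok' | 'degraded' | 'down'>;   // status of various modules
  externals: Record<string, 'ok' | 'down'>;              // databases, web services, file servers...
  pendingBackgroundTasks: number;
}

let unhandledExceptions = 0;
process.on('uncaughtException', (err) => {
  unhandledExceptions += 1;   // count and log; a real app would also decide whether to exit
  console.error(err);
});

function buildReport(): HealthReport {
  return {
    running: true,
    memoryRssBytes: process.memoryUsage().rss,
    unhandledExceptions,
    modules: { importer: 'ok', scheduler: 'ok' },         // placeholder module statuses
    externals: { database: 'ok', 'file-server': 'ok' },   // placeholder dependency statuses
    pendingBackgroundTasks: 0,
  };
}

createServer((req, res) => {
  if (req.url === '/health') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(buildReport()));
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);
```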
The answer is 'it depends'. Why do you need to monitor? How large is your operations staff? Do you need reporting? What is the application environment? Who cares if the application fails? Who cares if an exception happens? Are any of the errors recoverable? I could ask questions like these for a long time.
Great question.
We were looking for an application-level monitoring solution for our needs some time ago, without any luck. Popular monitoring solutions are mostly aimed at monitoring infrastructure and - in my opinion - they are too complicated for the requirements of most small and mid-sized companies.
We required (mainly) the following features:
alerts - we wanted to know about incidents as fast as possible
painless management - a hosted service would be the best
visualizations - it's good to know what is going on and take some knowledge from the data
Because we didn't find a suitable solution, we started to write our own. We finally ended up with an up-and-running service called AlertGrid. (You can check it out for free, of course.)
The idea behind it is to provide an easy way to handle custom monitoring scenarios. The integration API is very simple (one function with two required parameters). At the moment we and others are using it for:
monitor scheduled tasks (cron jobs)
monitor entire application logic execution
alert on errors in applications
we are also working on examples of basic infrastructure monitoring using AlertGrid
This is such an open ended question, but I would start with physical measurements.
1. Are all the machines I think are hosting this site pingable?
2. Are all the machines which should be serving content actually serving some content? (Ideally this would be hit from an external network.)
3. Is each expected service on each machine running?
3a. Have those services run recently?
4. Does each machine have hard drive space left? (Don't forget the db)
5. Have these machines been backed up? When was the last time?
Once one lays out the physical monitoring of the systems, one can address those specific to a system:
1. Can an automated script log in? How long did it take?
2. How many users are live? Have there been a million fake accounts added?
...
These sorts of questions get more nebulous, and can be very system specific. They also usually can be derived reactively when responding to physical measurements. Hard drive fills up? Maybe the web server logs got filled up because a bunch of agents created too many fake users. That kind of thing.
While plan A shouldn't necessarily be reactive, that is the way many a site sets up its monitoring system.
Minimum: make sure it is running :)
However, some other stuff would be very useful. For example, the CPU load, RAM usage and (in multi-user systems) which user is running what. Also, for applications that access the network, a list of network connections for each app. And (if you have access to the client computer(s)) it would be nice to be able to see the 'window title' of the app - maybe check every 2-3 minutes whether it changed and save it. Also, a list of files open by the application could be very useful, but it is not a must.
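As a rough illustration of the CPU/RAM part of this, sampled from inside a Node + TypeScript application (the interval and log format are arbitrary):

```typescript
import { freemem } from 'node:os';

let lastCpu = process.cpuUsage();                 // user+system time consumed so far, in microseconds
let lastSample = process.hrtime.bigint();

setInterval(() => {
  const cpu = process.cpuUsage(lastCpu);          // delta since the previous sample
  const now = process.hrtime.bigint();
  const elapsedUs = Number(now - lastSample) / 1_000;
  const cpuPercent = ((cpu.user + cpu.system) / elapsedUs) * 100;

  const rssMb = process.memoryUsage().rss / 1024 / 1024;
  const freeMb = freemem() / 1024 / 1024;

  console.log(`cpu=${cpuPercent.toFixed(1)}% rss=${rssMb.toFixed(1)}MB systemFree=${freeMb.toFixed(0)}MB`);

  lastCpu = process.cpuUsage();
  lastSample = now;
}, 60_000);                                       // sample once a minute
```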
I think this is fairly simple - monitor so that you can be warned early enough before something goes wrong. That means monitor dependencies and the application itself.
It's really hard to provide specifics if you're not going to give details on the application you're monitoring, so I'd say use that as a general rule.
At a minimum you want to know that the system is healthy. This is subjective in terms of what defines your system as healthy. Is it that the computers are up, the needed resources exist, the data is flowing through the system, the data is properly producing results, etc., etc.?
In my project we do monitoring of most of this and then some. It really comes down to what is the highest level that you can use to analyze that everything is working. In our case we need to know down to the data output. If you just need to know down to the 'are these machines up' level, it saves you from trying to show an inexperienced end user what is wrong.
There are also "off the shelf" tools that will do a lot of the hard work for you if you are just looking too hard into data results. I particularly liked Nagios when I was looking around but we needed more than it could easily show so I wrote our own monitoring system. Basically we also watch for "peculiarities" in the system, memory / cpu spikes, etc...
Thanks everyone for the input. I was really looking for application-level monitoring, not infrastructure monitoring, but it is good to know about both.
the difference is:
infrastructure monitoring would be servers plus MS Exchange Server, Apache, IIS, and so forth
application monitoring would be user machines and the specific programs that they use to do their jobs, and/or servers plus the data-moving/backend applications that they run to keep the data flowing
sometimes it's hard to draw the line - an oversimplified definition might be "if your team wrote it, it's an application; if you bought it, it's infrastructure"
I think in practice it is best to monitor both.
What you need to do is break down the business process of the application and then have the software emit events at major business components. In addition, you'll need to create end-to-end synthetic transactions (e.g. emulating end users clicking on a website). All that data would be fed into a monitoring tool. In the past, I've done JMX for applications, which flowed into Tivoli Monitoring's JMX Adapter, and I've done scripts that implement a "fake user" and then pipe the results into Tivoli Monitoring's Script Adapter. Tivoli Monitoring takes the data and then creates application health and performance charts from that raw data.
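A hedged sketch of the "end-to-end synthetic transaction" part in Node + TypeScript (Node 18+ assumed for the global fetch; the URL, probe name and interval are placeholders, and the JSON log line stands in for whatever your monitoring tool ingests):

```typescript
const TARGET = 'https://example.com/login';       // hypothetical page the fake user exercises

async function syntheticTransaction(): Promise<void> {
  const started = Date.now();
  try {
    const res = await fetch(TARGET);
    const elapsedMs = Date.now() - started;
    // Emit an event the monitoring tool can chart (availability + response time).
    console.log(JSON.stringify({ probe: 'login-page', ok: res.ok, status: res.status, elapsedMs }));
  } catch (err) {
    console.log(JSON.stringify({ probe: 'login-page', ok: false, error: String(err) }));
  }
}

setInterval(syntheticTransaction, 5 * 60_000);    // run the fake user every 5 minutes
```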
By looking at our DB's error log, we found that there was a constant stream of almost successful SQL injection attacks. Some quick coding avoided that, but how could I have setup a monitor for both the DB and Web server (including POST requests) to check for this? By this I mean if there are off the shelf tools for script-kiddies, are there off the shelf tools that will alert you to their sudden random interest in your site?
Funnily enough, Scott Hanselman had a post on UrlScan today which is one thing you could do to help monitor and minimize potential threats. It's a pretty interesting read.
UrlScan does seem like a nice option for IIS 6 and 7; I also found dotDefender (paid), which also covers Apache and IIS 5-7, and I had found an SQL injection sanitation ISAPI.
It is also worth noting, in light of a recent widespread SQL injection attempt, that disallowing your webapp's DB user account from querying the system tables (in MS SQL Server that's sysobjects and syscolumns) is a good idea.
I think this thread warrants more free solutions for Apache and other web servers.
Unfortunately intrusion detection was not what I had in mind, so sgfree isn't exactly a web site attack monitor, unless I'm not understanding how it works.
If you could go back and modify your app code, I'd suggest getting log4j/log4net integrated into the application. From there you could write code that would check a form field or URL (say at the global.asax level for .NET apps) and make a log entry when malicious code is detected.
The nice thing about log4j/log4net is that you can configure an e-mail/pager/SMS type appender so as soon as the malicious attempt was caught, you would be notified.
I'm in the process of merging some log4net code into our CMS system, and I'm looking to do just this in light of the influx of ASPRox attacks that have been coming our way.
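The answer above is .NET/log4net specific; the same idea sketched in Node + TypeScript might look like this (the patterns are deliberately naive and the notify hook is hypothetical - this is a detector for alerting, not a substitute for parameterized queries):

```typescript
// A few crude signatures of injection attempts; tune these to what you actually see in your logs.
const SUSPICIOUS = [
  /\bunion\b[\s\S]+\bselect\b/i,
  /;\s*(drop|truncate)\s+table/i,
  /\b(exec|execute)\b[\s\S]+\b(sp_|xp_)/i,
  /<script\b/i,
];

function looksMalicious(value: string): boolean {
  return SUSPICIOUS.some((re) => re.test(value));
}

// Call this for every form field / query-string value; on a hit, log it and notify someone,
// the way a log4net e-mail/pager/SMS appender would.
function screenRequestValue(source: string, value: string): void {
  if (looksMalicious(value)) {
    const entry = { when: new Date().toISOString(), source, value };
    console.error('possible injection attempt', entry);  // stand-in for the real logger
    // notifyOnCall(entry);                               // hypothetical alerting hook
  }
}

// Example: screenRequestValue('query:id', "1'; DROP TABLE users;--");
```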
Monitoring web and DB access logs should alert you to things like this, but if you want a more fully featured alert system I would suggest some kind of IDS/IPS. You'll need a spare machine though, and a switch that can do port mirroring.
If you have those then an IDS is a cheap way of monitoring your traffic for many intrusion attempts (there will be lots). Snort (www.snort.org) based IDSes are excellent, and there are some free fully packaged versions available. One I have used is StrataGuard (http://sgfree.stillsecure.com/), and it can be configured as an IDS (Intrusion Detection System) or as an IPS (Intrusion Prevention System). It's free to use if your traffic does not exceed 5Mbps.
If you do go with an IDS/IPS I'd advise you to let it run as a simple IDS for a month or so, before you allow it to prevent attacks.
This may be overkill, but if you have a spare machine lying around it can't hurt to have an IDS running passively.
You can set up your system to kick out an error message that then makes a JSON or HTTP call to a system that will monitor, report (log), and send out any kind of alert, such as SMS/email or a phone call.
Check out developer.alertcaster.com
Especially if you need to monitor multiple simultaneous events, which it sounds like you have going on, this might be a good fix.
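A minimal sketch of that pattern (Node + TypeScript; the endpoint is a placeholder, not AlertCaster's actual API):

```typescript
async function reportError(err: Error): Promise<void> {
  // POST the error as JSON to whatever system does the logging, alerting and escalation.
  await fetch('https://alerts.example.com/events', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      level: 'error',
      message: err.message,
      stack: err.stack,
      at: new Date().toISOString(),
    }),
  }).catch(() => { /* never let the alerting call itself take the app down */ });
}

process.on('uncaughtException', (err) => { void reportError(err); });
```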