The Breakpad project will be replaced by the Google Crashpad project. How do I integrate the new crash reporter with my application on Mac?
First, you'll need to set up depot_tools in order to build Crashpad.
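For reference, the usual depot_tools setup looks something like this (the clone location is up to you):
# Fetch depot_tools and add it to your PATH
git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
export PATH=$PATH:/path/to/depot_tools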
Next you'll have to get a copy of the Crashpad source.
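With depot_tools on your PATH, fetching the source typically looks like this:
# Create a working directory and fetch Crashpad plus its dependencies
mkdir crashpad
cd crashpad
fetch crashpad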
Build Crashpad with gn and ninja: gn generates a build configuration, and ninja does the actual building. Full instructions on how to build Crashpad are available here.
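A typical invocation, run from the crashpad directory created by fetch:
# Generate the build configuration, then compile
cd crashpad
gn gen out/Default
ninja -C out/Default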
For macOS, you will need to link against libclient.a, libutil.a, libbase.a, and all of the .o files in out/Default/obj/out/Default/gen/util/mach if you want to generate minidumps and upload them to a remote server. Additionally, you'll need to package crashpad_handler with your application and ensure that it is available at runtime.
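As a rough sketch, the link step might look something like the following; the exact locations of the .a files under out/Default/obj and the mini_chromium include path are assumptions you should verify against your own checkout:
# Hypothetical link line; $CRASHPAD is the root of your crashpad checkout
clang++ main.cpp -o myApp \
  -I$CRASHPAD -I$CRASHPAD/third_party/mini_chromium/mini_chromium \
  $CRASHPAD/out/Default/obj/client/libclient.a \
  $CRASHPAD/out/Default/obj/util/libutil.a \
  $CRASHPAD/out/Default/obj/third_party/mini_chromium/mini_chromium/base/libbase.a \
  $CRASHPAD/out/Default/obj/out/Default/gen/util/mach/*.o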
Integrate Crashpad with your application by configuring the Crashpad handler and pointing it at a server that is capable of ingesting Crashpad crash reports.
#include "client/crashpad_client.h"
#include "client/crash_report_database.h"
#include "client/settings.h"
#if defined(OS_POSIX)
typedef std::string StringType;
#elif defined(OS_WIN)
typedef std::wstring StringType;
#endif
using namespace base;
using namespace crashpad;
using namespace std;
bool initializeCrashpad(void);
StringType getExecutableDir(void);
bool initializeCrashpad() {
// Get directory where the exe lives so we can pass a full path to handler, reportsDir and metricsDir
StringType exeDir = getExecutableDir();
// Ensure that handler is shipped with your application
FilePath handler(exeDir + "/path/to/crashpad_handler");
// Directory where reports will be saved. Important! Must be writable or crashpad_handler will crash.
FilePath reportsDir(exeDir + "/path/to/crashpad");
// Directory where metrics will be saved. Important! Must be writable or crashpad_handler will crash.
FilePath metricsDir(exeDir + "/path/to/crashpad");
// Configure url with BugSplat’s public fred database. Replace 'fred' with the name of your BugSplat database.
StringType url = "https://fred.bugsplat.com/post/bp/crash/crashpad.php";
// Metadata that will be posted to the server with the crash report map
map<StringType, StringType> annotations;
annotations["format"] = "minidump"; // Required: Crashpad setting to save crash as a minidump
annotations["product"] = "myCrashpadCrasher" // Required: BugSplat appName
annotations["version"] = "1.0.0"; // Required: BugSplat appVersion
annotations["key"] = "Sample key"; // Optional: BugSplat key field
annotations["user"] = "fred#bugsplat.com"; // Optional: BugSplat user email
annotations["list_annotations"] = "Sample comment"; // Optional: BugSplat crash description
// Disable crashpad rate limiting so that all crashes have dmp files
vector<StringType> arguments;
arguments.push_back("--no-rate-limit");
// Initialize Crashpad database
unique_ptr<CrashReportDatabase> database = CrashReportDatabase::Initialize(reportsDir);
if (database == NULL) return false;
// Enable automated crash uploads
Settings *settings = database->GetSettings();
if (settings == NULL) return false;
settings->SetUploadsEnabled(true);
// Start crash handler
CrashpadClient *client = new CrashpadClient();
bool status = client->StartHandler(handler, reportsDir, metricsDir, url, annotations, arguments, true, true);
return status;
}
You'll also need to generate .sym files using dump_syms. You can upload .sym files to a remote server using symupload. Finally, you can symbolicate the minidump using minidump_stackwalk.
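For example (the binary name, upload URL, and symbol store layout below are placeholders):
# Produce a .sym file from your binary
dump_syms ./myApp > myApp.sym
# Upload the .sym file to your symbol server (URL is a placeholder)
symupload myApp.sym https://your.symbol.server/upload
# Symbolicate a minidump; minidump_stackwalk expects symbols at <dir>/<module>/<id>/<module>.sym
minidump_stackwalk report.dmp ./symbols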
I have just got word from one of the devs that it's not ready yet... https://groups.google.com/a/chromium.org/forum/#!topic/crashpad-dev/GbS_HcsYzbQ
When I execute the command:
composer archive create --sourceType dir --sourceName /home/testuser/test-network -a /home/testuser/test-network/dist/test-network.bna
I get an error:
Creating Business Network Archive
Looking for package.json of Business Network Definition
Input directory: /home/testuser/test-network
/usr/lib/node_modules/composer-cli/node_modules/yargs/yargs.js:1079
else throw err
^
Error: namespace already exists
at ModelManager.addModelFiles (/usr/lib/node_modules/composer-cli/node_modules/composer-common/lib/modelmanager.js:234:31)
at Function.fromDirectory (/usr/lib/node_modules/composer-cli/node_modules/composer-common/lib/businessnetworkdefinition.js:493:43)
at Function.handler (/usr/lib/node_modules/composer-cli/lib/cmds/archive/lib/create.js:80:42)
at Object.module.exports.handler (/usr/lib/node_modules/composer-cli/lib/cmds/archive/createCommand.js:31:30)
at Object.self.runCommand (/usr/lib/node_modules/composer-cli/node_modules/yargs/lib/command.js:233:22)
at Object.Yargs.self._parseArgs (/usr/lib/node_modules/composer-cli/node_modules/yargs/yargs.js:990:30)
at Object.self.runCommand (/usr/lib/node_modules/composer-cli/node_modules/yargs/lib/command.js:204:45)
at Object.Yargs.self._parseArgs (/usr/lib/node_modules/composer-cli/node_modules/yargs/yargs.js:990:30)
at Object.get [as argv] (/usr/lib/node_modules/composer-cli/node_modules/yargs/yargs.js:927:19)
at Object.<anonymous> (/usr/lib/node_modules/composer-cli/cli.js:58:5)
I have changed the files to build the network, and I even get the error with the example files:
File /home/testuser/test-network/lib/logic.js:
function sampleTransaction(tx) {
// Save the old value of the asset.
var oldValue = tx.asset.value;
// Update the asset with the new value.
tx.asset.value = tx.newValue;
// Get the asset registry for the asset.
return getAssetRegistry('org2.acme.sample2.SampleAsset')
.then(function (assetRegistry) {
// Update the asset in the asset registry.
return assetRegistry.update(tx.asset);
})
.then(function () {
// Emit an event for the modified asset.
var event = getFactory().newEvent('org2.acme.sample2', 'SampleEvent');
event.asset = tx.asset;
event.oldValue = oldValue;
event.newValue = tx.newValue;
emit(event);
});
}
File /home/testuser/test-network/test.cto:
namespace org2.acme.sample2
asset SampleAsset identified by assetId {
o String assetId
--> SampleParticipant owner
o String value
}
participant SampleParticipant identified by participantId {
o String participantId
o String firstName
o String lastName
}
transaction SampleTransaction {
--> SampleAsset asset
o String newValue
}
event SampleEvent {
--> SampleAsset asset
o String oldValue
o String newValue
}
I have tried changing the namespace too, and I got the same error.
OK, it's because you have multiple .cto files (i.e. in your directory) with the same namespace contained in them (perhaps you are making different editions or wanting multiple .cto files; the archive command checks the namespace in each CTO). Each business network model file has a single namespace. All resource declarations within the file are implicitly in this namespace. You can have multiple .cto files if you want to break the model out - but don't repeat the namespace in the additional files. You can even, if you want, have multiple model files with different namespaces (if that's what you want, of course).
See https://hyperledger.github.io/composer/reference/cto_language.html
Otherwise, I suggest moving any 'editions' of the CTO files with the same namespace out of the directory.
Then try building your .bna file again.
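For illustration, a sketch of splitting a model across two .cto files with distinct namespaces (the file and namespace names here are made up):
// File models/assets.cto
namespace org2.acme.sample2
asset SampleAsset identified by assetId {
  o String assetId
  o String value
}

// File models/participants.cto - a different namespace, so no clash on archive create
namespace org2.acme.participants
participant SampleParticipant identified by participantId {
  o String participantId
}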
I am trying to create a release without mapping an existing build in TFS/VSTS, and to get data displayed in the release summary once it is completed. In plain text, the steps are the following:
Release -> Empty Release Definition -> Add build task -> Create Release -> Deploy -> View Data in Summary Section
Summary data are viewable as expected, without any issues, in the following two scenarios:
Build -> Create build definition -> Add task -> Save and Queue build -> Build Success -> View Summary Data
Release -> Empty Release Definition -> Link pre-defined Build definition -> Create Release -> Provide a successfully run build version -> View Summary data
As per our understanding, the issue occurs when we retrieve artifacts of the given release. We can retrieve results for builds but fail to do the same for releases. Below is the sample code we use to read release data. It would be very helpful if you could provide us guidance on retrieving artifact details for a given release. Right now we use the following code on the client side for retrieving release artifacts, but it complains that release.artifacts is undefined. We have verified that the attachment file is saved to the given file location.
var c = VSS.getConfiguration();
c.onReleaseChanged(function (release) {
release.artifacts.forEach(function (art) {
var buildid = art.definitionReference.version.id;
// rest of the code is removed here
});
});
Below are the references we followed while looking for a solution:
https://github.com/Microsoft/vsts-tasks/blob/master/docs/authoring/commands.md
How to retrieve build attachment from VSTS release summary tab
https://github.com/Microsoft/vsts-extension-samples/blob/master/release-management/deployment-status-enhancer/scripts/main.js
https://github.com/Microsoft/vsts-extension-samples/blob/master/release-management/deployment-status-enhancer/index.html
https://www.visualstudio.com/en-us/docs/integrate/extensions/reference/client/core-sdk#method_getConfiguration
I was able to figure out an answer for this issue. I am herewith sharing the same for others' reference.
If we don't link an artifact (build definition), then the artifacts for the release/release definition will not be filled with data, so we won't be able to refer to the attachment that got uploaded as part of the build.
Hence, as per the current API implementation, below are the steps to follow to achieve this requirement:
Write data into the log while the extension runs as a build task (a sketch follows this list)
Read the above data once the build completes (on the client side)
Display the retrieved (processed if required) data in the release tab.
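A minimal sketch of step 1, assuming a Node-based build task: anything the task writes to stdout lands in the task log, so you can simply print a marker string for the client side to search for later. The payload shape here is hypothetical; it matches the "testRunData": marker used in the code below:
// In the build task: emit a marker line that the release extension can find in the log
var testRunData = { testRunId: 42, status: "passed" }; // hypothetical payload
console.log('"testRunData": ' + JSON.stringify(testRunData));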
I found the code below, which explains retrieving data from the log (reference: https://github.com/Dynatrace/Dynatrace-AppMon-TFS-Integration-Plugin/blob/master/src/enhancer/dynatrace-testautomation.ts):
public initialize(): void {
    super.initialize();
    // Get configuration that's shared between extension and the extension host
    var sharedConfig: TFS_Release_Extension_Contracts.IReleaseViewExtensionConfig = VSS.getConfiguration();
    if (sharedConfig) {
        // register your extension with host through callback
        sharedConfig.onReleaseChanged((release: TFS_Release_Contracts.Release) => {
            // get the dynatraceTestRun attachment from the build
            var rmClient = RM_Client.getClient();
            var LOOKFOR_TASK = "Collect Dynatrace Testrun Results";
            var LOOKFOR_TESTRUNDATA = "\"testRunData\":";
            var drcScope = this;
            release.environments.forEach(function (env) {
                var _env = env;
                // project: string, releaseId: number, environmentId: number
                rmClient.getTasks(VSS.getWebContext().project.id, release.id, env.id).then(function (tasks) {
                    tasks.forEach(function (task) {
                        if (task.name == LOOKFOR_TASK) {
                            rmClient.getLog(VSS.getWebContext().project.id, release.id, env.id, task.id).then(function (log) {
                                var iTRD = log.indexOf(LOOKFOR_TESTRUNDATA);
                                if (iTRD > 0) {
                                    var testRunData = JSON.parse(log.substring(iTRD + LOOKFOR_TESTRUNDATA.length, log.indexOf('}', iTRD) + 1));
                                    // call back into the extension with the environment name and parsed data
                                    drcScope.displayDynatraceTestRunData(_env.name, testRunData);
                                }
                            });
                        }
                    });
                });
            });
        });
        sharedConfig.onViewDisplayed(() => {
            VSS.resize();
        });
    }
}
I have a website with a form to upload files. I want to automatically sign in and upload image files once changes are seen in a local folder on my computer. Can any guidance be provided on the matter?
As per my understanding, you can write a scheduled task for this purpose, which can run, let's say, every hour, check if any changes were made in the directory, and then upload those files to your app.
I don't know what kind of system you are working on, but you could do something like this. If you are on a Linux system, you could use the watch command to track the activity of the directory of your choice. Then what you could do is use something like Mechanize in a Ruby script, triggered by the watch command, that will go and submit the form and upload the file for you by selecting the file with the latest creation date. A sketch follows.
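A rough Mechanize sketch of that idea, assuming a simple login form and an upload form (all URLs and field names are hypothetical; inspect your site's HTML for the real ones):
require 'mechanize'

agent = Mechanize.new

# Sign in (form fields are assumptions)
login_page = agent.get('https://example.com/login')
login_form = login_page.forms.first
login_form['username'] = 'me'
login_form['password'] = 'secret'
agent.submit(login_form)

# Pick the file with the latest creation date from the watched folder
newest = Dir.glob('/path/to/watched/*.{png,jpg}').max_by { |f| File.ctime(f) }

# Upload it through the site's form
upload_page = agent.get('https://example.com/upload')
upload_form = upload_page.forms.first
upload_form.file_uploads.first.file_name = newest
agent.submit(upload_form)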
I realize that it says Ruby on Rails in the post, but this answer is just as legitimate as writing a solution in Ruby (and a bit easier/faster).
If you use Qt C++ to do this, you could do something like the following:
(untested, you'll have to make adjustments for your exact situation)
Overview of Code:
Create a program that loops on a timer every 20 minutes and goes through the entire directory that you specify with WATCH_DIR. If it finds any files in that directory which were modified since the loop last ran (or after the program started but before the first loop ran), it uploads each such file to whatever URL you specify with UPLOAD_URL.
Then create a file called AutoUploader.pro and a file called main.cpp
AutoUploader.pro
QT += core network
QT -= gui
CONFIG += c++11
TARGET = AutoUploader
CONFIG += console
CONFIG -= app_bundle
TEMPLATE = app
SOURCES += main.cpp
main.cpp
#include <QtCore/QCoreApplication>
#include <QtCore/qglobal.h>
#include <QDir>
#include <QDirIterator>
#include <QFileInfo>
#include <QFile>
#include <QDateTime>
#include <QNetworkAccessManager>
#include <QNetworkRequest>
#include <QNetworkReply>
#include <QUrl>
#include <QTimer>
#include <QByteArray>
#include <QHash>
#define WATCH_DIR "/home/lenny/images"
#define UPLOAD_URL "http://127.0.0.1/upload.php"
// NOTE: main() is defined after the MainLoop class below, so that the class is fully declared before it is used.
class MainLoop : public QObject {
Q_OBJECT
public:
MainLoop(QString _watch_directory = qApp->applicationDirPath()) {
watch_directory = _watch_directory;
// the ACTION="" part of the upload form
website_upload_url = UPLOAD_URL;
/* 20 minutes
20 * 60 * 1000 = num of milliseconds that makes up
20 mins = 1200000 ms */
QTimer::singleShot(1200000, this, SLOT(check_for_updates()));
/* this will stop any file modified before you ran this program from
being uploaded so it wont upload all of the files at runtime */
program_start_time = QDateTime::currentDateTime();
}
QDateTime program_start_time;
QString watch_directory;
QString website_upload_url;
// hash table to store all of the last modified times for each file
QHash<QString, QDateTime> last_modified_time;
// Reply for the in-flight upload; set in upload_file() and consumed by replyFinished()
QNetworkReply *reply;
~MainLoop() { qApp->exit(); }
public slots:
void check_for_updates() {
QDirIterator it(watch_directory);
/* loop through all file in directory */
while (it.hasNext()) {
QFileInfo info(it.next());
/* check to see if the files modified time is ahead of
program_start_time */
if (info.lastModified().msecsTo(program_start_time) < 1) {
upload_file(info.absoluteFilePath());
}
}
/* set program_start_time to the current time to catch stuff next
time around and then start a timer to repeat this command in
20 minutes */
program_start_time = QDateTime::currentDateTime();
QTimer::singleShot(1200000, this, SLOT(check_for_updates()));
}
/* upload file code came from
https://forum.qt.io/topic/11086/solved-qnetworkaccessmanager-uploading-files/2
*/
void upload_file(QString filename) {
QNetworkAccessManager *am = new QNetworkAccessManager(this);
QString path(filename);
// defined with UPLOAD_URL
QNetworkRequest request(QUrl(website_upload_url));
QString bound="margin"; //name of the boundary
//according to rfc 1867 we need to put this string here:
QByteArray data(QString("--" + bound + "\r\n").toLatin1());
data.append("Content-Disposition: form-data; name=\"action\"\r\n\r\n");
data.append("upload.php\r\n");
data.append("--" + bound + "\r\n"); //according to rfc 1867
data.append(QString("Content-Disposition: form-data; name=\"uploaded\"; filename=\"%1\"\r\n").arg(QFileInfo(filename).fileName()));
data.append(QString("Content-Type: image/%1\r\n\r\n").arg(QFileInfo(filename).suffix())); //data type
QFile file(path);
if (!file.open(QIODevice::ReadOnly))
return;
data.append(file.readAll()); //let's read the file
data.append("\r\n");
data.append("--" + bound + "--\r\n");
request.setRawHeader(QString("Content-Type").toAscii(),QString("multipart/form-data; boundary=" + bound).toAscii());
request.setRawHeader(QString("Content-Length").toAscii(), QString::number(data.length()).toAscii());
this->reply = am->post(request,data);
connect(this->reply, SIGNAL(finished()), this, SLOT(replyFinished()));
}
void replyFinished() {
/* perform some code here whenever a download finishes */
}
};

int main(int argc, char *argv[])
{
QCoreApplication a(argc, argv);
MainLoop loop(WATCH_DIR);
return a.exec();
}

// A Q_OBJECT class is defined in this .cpp file, so the moc output must be included at the end:
#include "main.moc"
Before running this program, make sure to read through it completely and make the necessary changes by reading the comments and the post. Also, you may have to install the Qt framework, depending on your platform.
Anyway, the final step is to run qmake to create the project makefile, and finally make to build the binary.
Obviously the last steps are different depending on what system you are using.
This program will continue to run essentially forever until you close it, uploading changed files every 20 minutes.
Hope this helps...
In our Grails web applications, we'd like to use external configuration files so that we can change the configuration without releasing a new version. We'd also like these files to be outside of the application directory so that they stay unchanged during continuous integration.
The last thing we need to do is to make sure the external configuration files exist. If they don't, then we'd like to create them, fill them with predefined content (production environment defaults) and then use them as if they existed before. This allows any administrator to change settings of the application without detailed knowledge of the options actually available.
For this purpose, there's a couple of files within web-app/WEB-INF/conf ready to be copied to the external configuration location upon the first run of the application.
So far so good. But we need to do this before the application is initialized so that production-related modifications to data source definitions are taken into account.
I can do the copy-and-load operation inside the Config.groovy file, but I don't know the absolute location of the WEB-INF/conf directory at the moment.
How can I get the location during this early phase of initialization? Is there any other solution to the problem?
There is a best practice for this.
In general, never write to the folder where the application is deployed. You have no control over it. The next rollout will remove everything you wrote there.
Instead, leverage the built-in configuration capabilities that the real pros use (Spring and/or JPA).
JNDI is the norm for looking up resources like databases, files, and URLs.
Operations will have to configure JNDI, but they appreciate the attention.
They also need an initial set of configuration files, and be prepared to make changes at times as required by the development team.
As always, all configuration files should be in your source code repo.
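For example, a Grails application can delegate its database configuration to a container-managed JNDI resource; the JNDI name below is a placeholder that operations would bind in the container:
// grails-app/conf/DataSource.groovy
dataSource {
    // look up the container-managed data source instead of hard-coding credentials
    jndiName = "java:comp/env/jdbc/myAppDS"
}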
I finally managed to solve this myself by using Java's ability to locate resources placed on the classpath.
I took the .groovy files later to be copied outside, placed them into the grails-app/conf directory (which is on the classpath) and appended a suffix to their names so that they wouldn't get compiled upon packaging the application. So now I have *Config.groovy files containing configuration defaults (for all environments) and *Config.groovy.production files containing defaults for the production environment (overriding the precompiled defaults).
Now, Config.groovy starts like this:
grails.config.defaults.locations = [ EmailConfig, AccessConfig, LogConfig, SecurityConfig ]
environments {
    production {
        grails.config.locations = ConfigUtils.getExternalConfigFiles(
            '.production',
            "${userHome}${File.separator}.config${File.separator}${appName}",
            'AccessConfig.groovy',
            'Config.groovy',
            'DataSource.groovy',
            'EmailConfig.groovy',
            'LogConfig.groovy',
            'SecurityConfig.groovy'
        )
    }
}
Then the ConfigUtils class:
import java.util.logging.Logger

import org.apache.commons.io.FileUtils

public class ConfigUtils {

    // Log4j may not be initialized yet, so use the JDK logger
    private static final Logger LOG = Logger.getGlobal()

    public static def getExternalConfigFiles(final String defaultSuffix, final String externalConfigFilesLocation, final String... externalConfigFiles) {
        final def externalConfigFilesDir = new File(externalConfigFilesLocation)
        LOG.info "Loading configuration from ${externalConfigFilesDir}"
        if (!externalConfigFilesDir.exists()) {
            LOG.warning "${externalConfigFilesDir} not found. Creating..."
            try {
                externalConfigFilesDir.mkdirs()
            } catch (e) {
                LOG.severe "Failed to create external configuration storage. Default configuration will be used."
                e.printStackTrace()
                return []
            }
        }
        final def cl = ConfigUtils.class.getClassLoader()
        def result = []
        externalConfigFiles.each {
            final def file = new File(externalConfigFilesDir, it)
            if (file.exists()) {
                result << file.toURI().toURL()
                return
            }
            def error = false
            final def defaultFileURL = cl.getResource(it + defaultSuffix)
            def defaultFile
            if (defaultFileURL) {
                defaultFile = new File(defaultFileURL.toURI())
                error = !defaultFile.exists()
            } else {
                error = true
            }
            if (error) {
                LOG.severe "Neither ${file} nor ${defaultFile} exists. Skipping..."
                return
            }
            LOG.warning "${file} does not exist. Copying ${defaultFile} -> ${file}..."
            try {
                FileUtils.copyFile(defaultFile, file)
            } catch (e) {
                LOG.severe "Couldn't copy ${defaultFile} -> ${file}. Skipping..."
                e.printStackTrace()
                return
            }
            result << file.toURI().toURL()
        }
        return result
    }
}
I have a running Electron app and it is working great so far. For context, I need to run/open an external file, a golang binary that will do some background tasks.
Basically it will act as a backend, exposing an API that the Electron app will consume.
So far this is what I've got into:
I tried to open the file the "Node way" using child_process, but I failed opening even a sample txt file, probably due to path issues.
The Electron API exposes an open-file event, but it lacks documentation/examples and I don't know if it could be useful.
That's it.
How do I open an external file in Electron?
There are a couple of APIs you may want to study up on and see which helps you.
fs
The fs module allows you to open files for reading and writing directly.
var fs = require('fs');
fs.readFile(p, 'utf8', function (err, data) {
if (err) return console.log(err);
// data is the contents of the text file we just read
});
path
The path module allows you to build and parse paths in a platform agnostic way.
var path = require('path');
var p = path.join(__dirname, '..', 'game.config');
shell
The shell API is an Electron-only API that you can use to shell-execute a file at a given path, which will use the OS default application to open the file.
const {shell} = require('electron');
// Open a local file in the default app
shell.openItem('c:\\example.txt');
// Open a URL in the default way
shell.openExternal('https://github.com');
child_process
Assuming that your golang binary is an executable, you would use child_process.spawn to call it and communicate with it. This is a Node API. An event-handling sketch follows the snippet below.
var path = require('path');
var spawn = require('child_process').spawn;
var child = spawn(path.join(__dirname, '..', 'mygoap.exe'), ['game.config', '--debug']);
// attach events, etc.
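For instance, attaching events might look like this (a sketch; what the binary actually prints is an assumption):
// Listen to the child's output and exit code
child.stdout.on('data', function (data) {
    console.log('backend: ' + data);
});
child.stderr.on('data', function (data) {
    console.error('backend error: ' + data);
});
child.on('close', function (code) {
    console.log('backend exited with code ' + code);
});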
addon
If your golang binary isn't an executable then you will need to make a native addon wrapper.
Maybe you are looking for this?
dialog.showOpenDialog - refer to: https://www.electronjs.org/docs/api/dialog
If using electron#13.1.0, you can do it like this:
const { dialog } = require('electron')

dialog.showOpenDialog({ properties: ['openFile', 'multiSelections'] }).then(function (result) {
    console.info(result.filePaths) // => the absolute paths of the selected files
})
When the above code is triggered, you can see an "open file dialog" (with a different view style on Windows/Mac/Linux).
Electron allows the use of Node.js packages.
In other words, import Node packages as if you were in Node, e.g.:
var fs = require('fs');
To run the golang binary, you can make use of the child_process module. The documentation is thorough.
Edit: You have to solve the path differences. The open-file event is a client-side event, triggered by the window. Not what you want here.
I was also totally struggling with this issue, and almost seven years later the documentation is still not quite clear about the Linux case.
So, on Linux it falls under the Windows treatment in this regard, which means you have to look into the process.argv global in the main process. The first value in the array is the path that fired the app. The second argument, if one exists, holds the path that requested the app to be opened. For example, here is the output for my test case:
Array(2)
0: "/opt/Blueprint/b-test"
1: "/home/husayngonzalez/2022-01-20.md"
length: 2
So, when you're creating a new window, you check the length of process.argv, and if it is more than 1 (i.e. = 2), it means you have a path that requested to be opened with your app.
This assumes you got your application packaged with the ability to process those files, and also that you set the operating system to ask your application to open them (a sketch follows).
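A minimal sketch of that check in the main process (window options and the file-handling step are placeholders):
// main.js - see whether the OS passed a file path to open
const { app, BrowserWindow } = require('electron')

app.whenReady().then(function () {
    const win = new BrowserWindow({ width: 800, height: 600 })
    if (process.argv.length > 1) {
        const requestedPath = process.argv[1] // e.g. "/home/husayngonzalez/2022-01-20.md"
        // hand requestedPath to your renderer / file-handling code here
    }
    win.loadFile('index.html') // assumes an index.html next to main.js
})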
I know this doesn't exactly meet your specification, but it does cleanly separate your golang binary and Electron application.
The way I have done it is to expose the golang binary as a web service, like this:
package main
import (
"fmt"
"net/http"
)
func handler(w http.ResponseWriter, r *http.Request) {
//TODO: put your call here instead of the Fprintf
fmt.Fprintf(w, "HI there from Go Web Svc. %s", r.URL.Path[1:])
}
func main() {
http.HandleFunc("/api/someMethod", handler)
http.ListenAndServe(":8080", nil)
}
Then from Electron just make AJAX calls to the web service with a JavaScript function, like this (you could use jQuery, but I find this pure JS works fine):
function get(url, responseType) {
    return new Promise(function (resolve, reject) {
        var request = new XMLHttpRequest();
        request.open('GET', url);
        request.responseType = responseType;
        request.onload = function () {
            if (request.status == 200) {
                resolve(request.response);
            } else {
                reject(Error(request.statusText));
            }
        };
        request.onerror = function () {
            reject(Error("Network Error"));
        };
        request.send();
    });
}
With that method you could do something like
get('http://localhost:8080/api/someMethod', 'text')
    .then(function (x) {
        console.log(x);
    });
}