I am trying to get CircleCI to work with my Electron app, but I can't figure out how to stop the timeout error.
You can look at the app here: https://github.com/sauravyash/OutFlux
It fails on the npm test stage of the build with:
> outflux#1.0.0 test /home/ubuntu/OutFlux
> electron .
Xlib: extension "RANDR" missing on display ":99".
Xlib: extension "RANDR" missing on display ":99".
command ((npm :test)) took more than 10 minutes since last output
I'm new to the whole idea of CI so bear with me if the answer is obvious.
In case anyone sees this: you must understand that running electron . by itself isn't testing anything, which is the point of CI; it just launches the app and never exits, so CI eventually kills it for producing no output. You should use something like Spectron to test whether your app works in headless testing (testing without a real display).
https://github.com/electron/spectron
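A minimal Spectron smoke test might look like the sketch below (this is not taken from the linked repo; it assumes Mocha as the runner, Spectron installed as a dev dependency, and illustrative file names). The key point is that the test process exits when the suite finishes, so CI no longer sits waiting for output:

    // test/smoke.js — a minimal sketch, not from the linked repo.
    // Assumes: npm i -D mocha spectron, and "test": "mocha" in package.json.
    const assert = require('assert');
    const { Application } = require('spectron');

    describe('application launch', function () {
      this.timeout(10000);

      beforeEach(function () {
        this.app = new Application({
          path: require('electron'), // path to the Electron binary devDependency
          args: ['.'],               // launch the app in the current directory
        });
        return this.app.start();
      });

      afterEach(function () {
        if (this.app && this.app.isRunning()) {
          return this.app.stop();
        }
      });

      it('opens a window', async function () {
        const count = await this.app.client.getWindowCount();
        assert.strictEqual(count, 1);
      });
    });

On a headless CI container Electron still needs a virtual display to start, which is what the Xvfb display ":99" in the error output is providing.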
I'm trying to follow other people who had a similar issue, like this one: Electron-Builder include external folder. I wish I could be more specific about what my problem is, but I don't know what's wrong.
I am making a React app which has a server with an SQLite DB, and I'm trying to use Electron to turn it into an installable executable.
Here is my dummy repo: https://github.com/Juan321654/electron_react_with_build_installer_sqlite_db. The master branch only covers making Electron work with React; the server branch is the one I need help with.
You can clone it and just run npm i, then npm run start to launch the executable and npm run build to build.
The code works fine in development mode, and even after I make the build with Electron I can launch the executable and it reads the data from the database. But as soon as I take the dist folder out of the project to send to someone or install the software, the app still loads, but it no longer reads the data from the server/DB. I am not sure if it's missing node modules or the server folder, or maybe some kind of command in my package.json scripts.
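If it helps, I suspect the missing piece is something like electron-builder's extraResources config, which copies folders that aren't part of the bundled app code into the packaged output. The sketch below is only a guess at the kind of config I mean, with folder names assumed from my repo layout, not a confirmed fix:

    {
      "build": {
        "files": ["build/**/*", "electron/**/*"],
        "extraResources": [
          { "from": "server", "to": "server" }
        ]
      }
    }

Anything listed there ends up under process.resourcesPath at runtime, so paths built with __dirname during development would have to be adjusted once the app is packaged.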
I have Jest tests that run against a dockerized Neo4j database, and sometimes they fail on CircleCI. The error message for all 25+ of them is:
thrown: "Exceeded timeout of 5000 ms for a hook.
#*******api: Use jest.setTimeout(newTimeout) to increase the timeout value, if this is a long-running test."
Since they fail only sometimes, like once in 25 runs, I am wondering whether jest.setTimeout will solve the issue. I was able to make them fail locally by setting jest.setTimeout(10), but I am not sure how to debug this further, or whether something other than a small timeout (default 5000 ms) could be the issue. I would understand if a few tests failed here and there, or if other suites failed too, but it is always the same single file, with every test in that file failing, and never any other file.
Additional information: locally, that single file runs in less than 1000 ms connected to the staging database, which is huge compared to the dockerized one, which has only a few files at the time of running.
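For reference, the change the error message suggests is a one-liner, either at the top of the failing file or in a global setup file (file names here are illustrative):

    // jest.setup.js — wired up in jest.config.js via
    // setupFilesAfterEnv: ['<rootDir>/jest.setup.js']
    // 30000 is an arbitrary value; the default is 5000 ms.
    jest.setTimeout(30000);

Whether raising the timeout actually fixes an intermittent failure like this, rather than just hiding it, is exactly what I'm unsure about.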
For anyone who sees this, I was able to solve this by adding the --maxWorkers=2 flag to the test command in my CircleCI config. See here for details: https://support.circleci.com/hc/en-us/articles/360005442714-Your-test-tools-are-smart-and-that-s-a-problem-Learn-about-when-optimization-goes-wrong-
Naman's answer is perfect! I couldn't believe it, but it really solved my problem. Just to be extra clear on how to do it:
I changed the test script in my package.json from jest to jest --maxWorkers=2. Then I pushed, and it did solve my error.
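In other words, assuming the default script name, the scripts section ends up looking like this:

    {
      "scripts": {
        "test": "jest --maxWorkers=2"
      }
    }

If I understood the linked CircleCI article correctly, Jest sizes its worker pool from the CPU count the machine advertises, and a CircleCI container reports the host's CPUs rather than the share the container actually gets, so the over-provisioned workers starve each other until hooks hit the 5000 ms timeout.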
I'm following the workshop provided by AWS: https://cdkworkshop.com/20-typescript/30-hello-cdk/300-cdk-watch.html.
When I issue
$ cdk watch
the command succeeds but never returns. I can see that the new function is deployed correctly in the AWS console, but it seems like the command didn't finish normally.
When I issue
$ cdk deploy --hotswap
I get no error. It deploys and returns cleanly.
Does anyone know about this, or has anyone experienced the same?
This is the expected behaviour. "watch mode (cdk deploy --watch, or cdk watch for short) continuously monitors your CDK app's source files and assets for changes and immediately performs a deployment of the specified stacks when a change is detected".
Watch mode is a common CLI idiom. TypeScript's tsc --watch works similarly, for instance, continuously compiling to JS as you make changes.
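For what it's worth, which files trigger a redeploy is controlled by the "watch" key that cdk init generates in cdk.json; roughly like this (the globs are illustrative and the app entry point is an assumption):

    {
      "app": "npx ts-node --prefer-ts-exts bin/my-app.ts",
      "watch": {
        "include": ["**"],
        "exclude": ["README.md", "cdk*.json", "**/*.d.ts", "**/*.js", "node_modules"]
      }
    }

Press Ctrl+C to exit watch mode. cdk deploy --hotswap performs the same fast deployment, but only once, which is why it returns cleanly.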
I'm running a CI machine with Xcode.
The tests are triggered using fastlane gym. I see this line in the output:
2019-05-27 16:04:28.417 xcodebuild[54605:1482269] [MT]
IDETestOperationsObserverDebug: (A72DBEA3-D13E-487E-9D04-5600243FF617)
Finished requesting crash reports. Continuing with testing.
This operation takes some time (about a minute) to complete. As far as I understand, Xcode requests crash reports from Apple to show in the "Organizer" window.
Since this is a CI machine, the crash reports will never be viewed on it, and this step could be skipped completely. How can I skip it?
Your mileage may vary, but after setting up a new machine with the following configuration, I encountered the same issue OP details:
macOS 10.15.2
Xcode 11.3
fastlane 2.139.0
Simulators @ 13.3
When I ran my fastlane test lane with 3 devices, I wound up at the same "Finished requesting crash reports" message and sat idle for about four minutes before I terminated it.
I then took the steps that I outlined in the comment to OP:
fastlane scan init
Edit my Scanfile to look like this:
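(The original Scanfile isn't reproduced here; the sketch below shows the relevant settings based on the description that follows, with the scheme and device names as placeholders.)

    # Scanfile — generated by `fastlane scan init`, then edited.
    # Scheme and device names are placeholders.
    scheme("MyApp")
    devices(["iPhone 8", "iPhone 11", "iPad Pro (11-inch)"])
    disable_concurrent_testing(true)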
I initially set disable_concurrent_testing(false), and when I ran the tests through fastlane, I got stuck again. Changing the value to disable_concurrent_testing(true) has allowed the tests to now run on my machine.
I think blaming "Finished requesting crash reports. Continuing with testing" may be a red herring. I was having several jobs stop at this step, but when I looked closer (I ran the lane locally and tailed the logs), I saw that my test was failing due to something else. It looks like Fastlane doesn't correctly show how long this step takes; in fact, I think if you're seeing that message, the step is already complete and your tests are running. That changing concurrency fixes it for you may indicate your tests are failing due to a race condition.
So, anyway. Install fastlane locally, run your lane locally, tail -f the build output as well as the log file and see if the problem is revealed there. It was for me, but, as with everything, YMMV.
I'm building a solution which requires a batch file to be run after the build (there's a sequence in the workflow for this). TFS flags the build as partially succeeded, but there's no error in the log, even in full verbose mode ("diagnostic"). I'm checking the errorlevel after each line in the batch file, and it's always 0. I've also tested redirecting stdout and stderr to a file after each line, and there's no clue there.
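The per-line checks looked roughly like this (a sketch, not the actual script; the commands and paths are illustrative):

    rem post-build.cmd — illustrative only
    copy build\output\app.pkg \\dropserver\builds\ 1>>out.log 2>>err.log
    echo errorlevel after copy: %errorlevel% >> out.log
    if errorlevel 1 exit /b %errorlevel%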
It’s got nothing to do with unit tests because I’m skipping them for the time being.
I’ve noticed that usually when an error occurs in a batch file (e.g. file not found) there’s a visual cue to indicate the error and this matches the partially succeeded status. But I don’t see any visual cue.
So how can TFS decide that the build is only partially succeeded?
Thank you,
Solved.
It turns out the GetImpactedTests activity is throwing an exception (I can see it in the event viewer of the TFS machine), but it doesn't show at all in the build log.
I'm guessing that this exception makes the build partially succeed (because the compilation part succeeded), but I couldn't see the assignment explicitly in the build log. When I bypass the impact analysis (either by setting Analyze Test Impact to False or by removing the GetImpactedTests activity altogether), the error does not occur.
We experienced something similar here using the Lab workflow (to kick off our CodedUI tests). Different build template, same symptoms.
I have noticed that the build process reports that it partially succeeded, highlighting what seems to be a successful step in the deploy script (batch file).
The command in question installs our mobile app on a mobile device (in order to test it at night):
adb install -d -r test.apk
I looked at the errorlevel right after running the adb command, but it was 0.
Then I thought that maybe the command was sending its output to stderr, and I found this article on the Android Open Source Project site, which confirmed my hypothesis.
Following is my fix:
adb install -r -d test.apk 2>&1
Appending 2>&1 simply redirects stderr to stdout; now my deploy script no longer reports an error, and the build succeeds (when all tests pass!).
Conclusion: when a script writes anything to stderr, the build workflow reports it as an error (a partial success, since it does not prevent execution of the workflow).
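So if any tool in your deploy script writes informational output to stderr, the same workaround applies; the general pattern looks like this (the tool name here is a placeholder):

    rem Merge stderr into stdout so TFS does not treat informational
    rem stderr output as an error, but still fail on a real exit code.
    some-tool.exe install package 2>&1
    if errorlevel 1 exit /b %errorlevel%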
I know this is not your particular issue but since we had the same symptoms, I thought the stderr information could help somebody else find out the reason why their build process is reporting a partial success even though everything seems to work.