Inconsistency for decreasing loss - machine-learning

[x] Check that you are up-to-date with the master branch of Keras. You can update with:
pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps
[x] If running on TensorFlow, check that you are up-to-date with the latest version. The installation instructions can be found here.
[x] Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short).
Hello everyone, I have been using this Python script.
The problem is that I am unable to reproduce a similar result every time.
Sometimes I can reproduce a similar result (a loss of ~0.3 within 500 epochs), but other times I still get a loss of 3.x after 1500 epochs. I am not sure whether this is a bug or whether the algorithm is simply stuck in a local minimum.
Moreover, after I adjusted the close price (without dividing by 100) and increased the learning rate by 100x, the problem persists and the loss gets stuck at 30000. Do you think there is anything I can do to improve the model?
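As an aside, run-to-run variation in training usually comes from random weight initialization and data shuffling, so fixing the random seeds is a common first step when chasing reproducibility. A minimal sketch (the seed value 42 is arbitrary; with a TensorFlow backend one would also seed TensorFlow itself, and exact op-level determinism may require more configuration):

```python
import random

import numpy as np

# Fixing seeds makes weight initialization and shuffling repeatable across runs.
random.seed(42)
np.random.seed(42)

# The same seed now yields the same "random" draw on every run.
print(np.random.rand())
```

This does not fix a model that is genuinely stuck in a bad minimum, but it separates "the algorithm is unstable" from "every run starts from a different place".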

After normalizing the feature set, it works fine now.
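For reference, a minimal sketch of min-max feature normalization with made-up numbers (the actual feature set from the script is not shown here):

```python
import numpy as np

# Hypothetical feature matrix: each row is a sample, each column a feature
# (e.g. a price and a volume on very different scales).
X = np.array([[100.0, 2.0],
              [150.0, 4.0],
              [200.0, 6.0]])

# Min-max normalization: rescale every column into [0, 1].
X_norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
print(X_norm)
```

Putting all features on a comparable scale keeps no single large-valued column (such as an undivided close price) from dominating the gradients.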

Related

Tidymodels produces different results

I built eight different machine learning models back in October 2021 using Tidymodels. Back then, I carefully saved the input data, my output, and my .R files. Now, when I run my code again, I get totally different outputs. I have updated my packages since October, and the only explanation I can think of is that some updates caused the discrepancies. I wonder whether others have experienced such issues and whether they were able to resolve them.
Edit: I am using the same machine and the same random seeds.
Following Julia Silge's clue, I installed an older version of the rsample package (version 0.0.9), and then I could reproduce all my results.

IpoptSolver has not been compiled as part of this binary

I have freshly installed drake using pip, and while going through tutorials about mathematical programs, I cannot use the example with IpoptSolver. I am getting the error message below, but I cannot find information in the documentation on how to compile it. Could you point me in the right direction?
ValueError: IpoptSolver cannot Solve because IpoptSolver::available() is false, i.e., IpoptSolver has not been compiled as part of this binary. Refer to the IpoptSolver class overview documentation for how to compile it.
In the pip install instructions as of today, we see a link noting that the wheel still has some known issues; see issue #15954.
Following that, we find a link to Enable IPOPT in pip wheel #15971, an issue which was only resolved 4 hours ago.
So yes, in the recent past, the IPOPT solver was not yet shipped in the pip wheels. However, it will be included as of the v0.39.0 release within the next day or two. (Edit: Drake v0.39.0 has been released, and has IPOPT available now.)
In the meantime, possibly some other solver like NloptSolver or SnoptSolver would be able to solve the program.
The list of other solvers is here: https://drake.mit.edu/doxygen_cxx/group__solvers.html.
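The fallback idea can be sketched generically. The classes below are simple stand-ins, not the real pydrake solvers, but they mimic the `available()` query that the error message refers to:

```python
# Stand-in solver classes (hypothetical, for illustration only): the real
# pydrake solvers expose a similar static available() check.
class StubIpoptSolver:
    @staticmethod
    def available():
        return False  # mimics "has not been compiled as part of this binary"


class StubNloptSolver:
    @staticmethod
    def available():
        return True


def first_available(solver_classes):
    """Return the first solver class whose binary reports it is usable."""
    for cls in solver_classes:
        if cls.available():
            return cls
    raise RuntimeError("no available solver in this binary")


chosen = first_available([StubIpoptSolver, StubNloptSolver])
print(chosen.__name__)  # StubNloptSolver
```

Checking `available()` before calling `Solve` turns the ValueError above into a graceful fallback to another solver.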

Do I need to install all these 3 libs or just one? OpenCV

I have read many Stack Overflow questions about OpenCV; they all tell me what each package is.
When I see so many different but similar names, I feel great confusion.
I thought I just needed to install opencv-python, but then, all of a sudden, I found another one that looks correct to me as well, py-opencv... which one is the correct one?!
So my question is:
In a newly created conda environment, I'd like to add OpenCV to use in my Python Jupyter Notebook. Do I need to install all of them or just one?
Them: opencv, Python-OpenCV(opencv-python), py-opencv, libopencv
One: py-opencv or opencv-python
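One fact that cuts through the naming confusion: whichever distribution provides OpenCV (opencv-python from pip, or py-opencv/libopencv via conda), the Python import name is always `cv2`. A small probe that runs whether or not it is installed:

```python
import importlib.util

# find_spec() checks importability without actually importing the module,
# so this never raises ImportError.
spec = importlib.util.find_spec("cv2")
print("cv2 is importable" if spec is not None else "cv2 is not installed")
```

So in practice you install exactly one distribution per environment; installing several that all provide `cv2` invites conflicts.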

How to minimize the edit-build-test cycle for developing a single clang tool?

I am trying to build a clang static checker that will help me identify path(s) to lines in code where variables of type "struct in_addr" (on Linux) are being modified in C/C++ programs. So I started to build a tool that finds lines where "struct in_addr" variables are being modified (and will then attempt to trace paths to such places). For my first step I felt I only needed to work with the AST; I would work with paths as step 2.
I started with the "LoopConvert" example. I understand it and am making some progress, but I can't find out how to build only the "LoopConvert" example in the CMake build ecosystem. I had used
cmake -DCMAKE_BUILD_TYPE=Debug -G "Unix Makefiles"
when I started. I find that when I edit my example and rebuild (by typing "make" in the build directory), the build system checks everything and seems to rebuild quite a bit, even though nothing has changed except one line in LoopConvert.cpp, and it takes forever.
How can I rebuild only the one tool I am working on? If I can shorten my edit-compile-test cycle, I feel I can learn more quickly.
In a comment you say that switching to Ninja helped. I'll add my advice when using make.
First, as explained in an answer to Building clang taking forever, when invoking cmake, pass -DCMAKE_BUILD_TYPE=Release instead of -DCMAKE_BUILD_TYPE=Debug. The "debug" build type produces an enormous clang executable (2.2 GB) whereas "release" produces a more reasonable size (150 MB). The large size causes the linker to spend a long time and consume a lot of memory (of order 10 GB?), possibly leading to swapping (it did on my 16 GB VM).
Second, and this more directly addresses the excessive recompilation, what I do is save the original output of make VERBOSE=1 to a file, then copy the command lines that are actually relevant to what I am changing (compile, archive, link) into a little shell script that I run thereafter. This is of course fragile, but IMO the substantially increased speed is worth it during my edit-compile-debug inner loop.
I have not tried Ninja yet, but I have no reason to doubt your claim that it works much better. Although, again, I think you want the build type to be Release for faster link times.

Python 2.7: Issue with cv2 (opencv) DLL load failed: The specified procedure could not be found. (Windows XP)

I have been asked to make a Python script that captures screenshots at regular intervals on a minimal Windows XP machine (sadly); for obvious compatibility reasons I used Python 2.7.10 x86.
The application works as expected on Windows 10 using the same Python version (32 Bits), but does not work on the Windows XP machine.
Neither opencv-python nor Pillow works; they both indicate that the specified procedure can't be found.
I think the issue is related to missing dependencies, especially since the XP machine is so minimal...
To be more precise, the Python file crashes at the "import cv2" line.
If there are any other screenshot and image-comparison libraries, I'd be glad to know!
The last resort would be going to a lower level: finding the value from its memory address (from the app), saving it, and comparing the old value with the new one. However, I wonder if this is even possible with Python...
Thank you for reading, any help appreciated!
EDIT :
Sorry, I forgot to mention that I need to make a comparison with the previous image.
I used: ImageChops.difference(a, b)
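For reference, a minimal sketch of ImageChops.difference, with two identical in-memory images standing in for consecutive screenshots:

```python
from PIL import Image, ImageChops

# Two identical images stand in for two consecutive screenshots.
a = Image.new("RGB", (8, 8), "red")
b = Image.new("RGB", (8, 8), "red")

diff = ImageChops.difference(a, b)
# getbbox() returns the bounding box of the non-zero (changed) region,
# or None when the images are pixel-identical.
print(diff.getbbox())  # None
```

Checking `diff.getbbox() is None` is a cheap way to decide whether the screen changed between two captures.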
Finally, I went with Pillow instead of cv2, specifically using pip install Pillow==4.0 to resolve the DLL load failure (both libraries were affected), but I ended up with a new error:
IOError: encoder zip not available
I am still investigating, but for more details: neither doing
from PIL.ImageGrab import grab  # assumption: grab() here is Pillow's ImageGrab.grab
image = grab()
image.save("captures/capture.png")
nor
image = pyautogui.screenshot("captures/capture.png")
produces any results so far... :(
Change
image = pyautogui.screenshot("captures/capture.png")
to
image = pyautogui.screenshot(r"captures/capture.png")
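For what it's worth, the r prefix only changes anything when the path contains backslashes (as Windows-style paths often do); with forward slashes the two literals are identical. A small sketch (the filename is made up) of why raw strings matter:

```python
# In an ordinary string literal, backslash pairs such as "\n" are escape
# sequences; in a raw string (r"...") the backslash is kept literally.
plain = "captures\new.png"   # "\n" silently becomes a newline character
raw = r"captures\new.png"    # backslash and "n" stay as two characters

print(plain == raw)          # False
print(len(plain), len(raw))  # 15 16
```

This is why raw strings (or forward slashes, or os.path.join) are the safe habit for file paths on Windows.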
