Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
Can we compile and run a C/C++ program entirely in memory, without disk access?
Normally one writes a C/C++ program in an editor, saves it to a file on disk, and then compiles it. Compiling creates an executable file on disk, which one runs to see whether it works correctly. What I want to do is write a program, save it to a file, and invoke gcc/g++ in such a way that it generates the machine code but loads it directly into memory to run. Once I am satisfied with the program's output, I can invoke gcc/g++ as usual to create the executable file on disk.
You can use gcc -pipe to avoid some temporary files, and you can pipe the source code into GCC with gcc -xc -. You can even have GCC write its output to stdout:
echo 'int main() {}' | gcc -xc - -S -o -
Once you've done all that, you are left with a couple issues: where to get GCC from (usually it's on disk!), and where to get the #include and library files you need (ditto). You could install GCC (it comes with a standard library) onto a RAM disk (look at /dev/shm), but is that really what you're trying to accomplish?
You aren't going to speed up compilation this way. The GCC docs say as much regarding -pipe. If you want faster compilation, improve your source code, implement a parallel build system (make -j), and/or use a faster linker, like Gold instead of classic BFD.
Yes, you can. An easy way is to use tmpfs.
massif doesn't show any function names for functions that are in a library which has been closed with dlclose().
If I remove the dlclose() call, then recompile and run the program, I can see the symbols. Is there a way to get the function names without changing the source code?
The new version of valgrind (3.14) has an option that instructs valgrind to keep the symbols of dlclose'd libraries:
--keep-debuginfo=no|yes Keep symbols etc for unloaded code [no]
This allows saved stack traces (e.g. memory leaks)
to include file/line info for code that has been
dlclose'd (or similar)
However, massif does not make use of this information.
You might obtain a usable heap reporting profile by doing:
valgrind --keep-debuginfo=yes --xtree-leak=yes
and then visualise the heap memory using e.g. kcachegrind.
This question already has an answer here:
Building clang taking forever
(1 answer)
Closed 4 years ago.
I ran into a problem when trying to build clang with ninja. I executed all the commands, one after another, from this tutorial:
http://clang.llvm.org/docs/LibASTMatchersTutorial.html
but after running ninja at the step where the tutorial says "Okay. Now we'll build Clang!", it took 2 hours to build half of the objects, and after that the OS froze and I couldn't even move the cursor. I tried on both my laptop and my PC, with the same result. What caught my attention is that the build folder is huge (18.3 GB).
Is there any way to solve the problem?
I have already answered the same question on Stack Overflow here. I'd suggest a deeper search in future before asking the same question.
Including the information here in case the link is lost: when building clang in debug mode (the default), a lot of debug information is generated for each compilation unit, so the object files become very large.
The solution is to turn off the debug info that is attached by default. You are probably not going to debug clang, so you won't need it. So instead of just doing this:
cmake -G Ninja ../llvm -DLLVM_BUILD_TESTS=ON
What you should do is
cmake -G Ninja ../llvm -DLLVM_BUILD_TESTS=ON -DCMAKE_BUILD_TYPE=Release
All the other steps remain the same.
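If the machine still struggles even in Release mode, two more CMake switches can help; these are my additions, not part of the original answer. Linking large binaries in parallel is what usually exhausts RAM and freezes the OS, so limiting parallel link jobs is the usual fix (the gold linker is optional and must be installed):

```shell
cmake -G Ninja ../llvm -DLLVM_BUILD_TESTS=ON -DCMAKE_BUILD_TYPE=Release \
      -DLLVM_PARALLEL_LINK_JOBS=1 \
      -DLLVM_USE_LINKER=gold
```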
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 7 years ago.
I am using Cordova to create a mobile application on iOS. I want to upload multiple files to a server, but instead of sending them one after another, I want to compress these files into one '.zip' file and upload that.
I searched for a Cordova plugin, but I found only these two, and neither solves my problem:
Icenium/cordova-plugin-zip zips and unzips files on iOS, but there is no documentation or example of how to use it.
jjdltc/jjdltc-cordova-plugin-zip zips and unzips files on the Android platform only.
Please help me find a plugin that zips files and folders on iOS, or give me an example of how to use the first plugin.
We have managed to unzip, modify, and re-zip files using the JSZip JavaScript API and the Cordova file plugin. It should also work for creating a zip from scratch. There isn't a real need to use native code for the zipping itself (although it is most likely faster), only for reading the files to zip and writing the zip file. Hence, it is fine to use a JavaScript API that wasn't designed for Cordova in particular.
Process
If you want to try the JSZip + file plugin method, here is a quick outline of how we worked with it :
1. We use the file plugin to read the files as binary. If you are lucky enough to have only text files, you could read them as text too, but it's less flexible that way. Note that to read a file, you will need to obtain the File object from its FileEntry, which will require navigating the file system using DirectoryEntry objects. If you aren't familiar with the file plugin, take a look at its documentation for this step.
2. We create a JSZip object.
3. Manipulate the JSZip object as you see fit. You can create folders within the zip, add files, remove some, and modify content. The JSZip documentation gives simple, good examples.
4. Generate the zip's binary content using JSZip's generate() method, specifying the type. If you want to create an actual file with it, we noticed that the string and arraybuffer types could be written with the file plugin's write method (after creating the file) without conversion code on iOS, but not uint8array (and we didn't try the other generation types).
5. Treat the binary as you wish. It is the same as if you had read the binary content of an actual zip file.
PS: The file plugin has some outdated documentation on cordova.apache.org. While the examples in it can be useful, be aware that some of those are not valid anymore. For instance, resolveLocalFileSystemURI() is now resolveLocalFileSystemURL().
Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
This question was caused by a typo or a problem that can no longer be reproduced. While similar questions may be on-topic here, this one was resolved in a way less likely to help future readers.
Closed 5 years ago.
I have been using sloccount a lot with Objective-C projects on OS X, and never had a problem until I recently upgraded to OS X 10.9 Mavericks. When I try to run this simple script:
#!/bin/sh
sloccount --duplicates --wide --details WeatherApp > Build/sloccount.sc
I’m getting this:
/Applications/sloccount/compute_sloc_lang: line 52: c_count: command not found
Warning! No 'Total' line in Models/ansic_outfile.dat.
The output file has this:
Creating filelist for Application
Creating filelist for Controllers
Creating filelist for Helpers
Creating filelist for Managers
Creating filelist for Models
Creating filelist for Support
Creating filelist for Views
Categorizing files.
Computing results.
44 objc Application /Users/ruenzuo/Documents/GitHub/north-american-ironman/WeatherApp/Application/AppDelegate.m
11 objc Application /Users/ruenzuo/Documents/GitHub/north-american-ironman/WeatherApp/Application/AppDelegate.h
24 objc Controllers /Users/ruenzuo/Documents/GitHub/north-american-ironman/WeatherApp/Controllers/CitiesViewController.m
10 objc Controllers /Users/ruenzuo/Documents/GitHub/north-american-ironman/WeatherApp/Controllers/CitiesViewController.h
74 objc Helpers /Users/ruenzuo/Documents/GitHub/north-american-ironman/WeatherApp/Helpers/TranslatorHelper.m
47 objc Helpers /Users/ruenzuo/Documents/GitHub/north-american-ironman/WeatherApp/Helpers/ValidatorHelper.m
18 objc Helpers /Users/ruenzuo/Documents/GitHub/north-american-ironman/WeatherApp/Helpers/ErrorNotificationHelper.h
21 objc Helpers /Users/ruenzuo/Documents/GitHub/north-american-ironman/WeatherApp/Helpers/TranslatorHelper.h
14 objc Helpers /Users/ruenzuo/Documents/GitHub/north-american-ironman/WeatherApp/Helpers/ValidatorHelper.h
85 objc Managers /Users/ruenzuo/Documents/GitHub/north-american-ironman/WeatherApp/Managers/WeatherAPIManager.m
20 objc Managers /Users/ruenzuo/Documents/GitHub/north-american-ironman/WeatherApp/Managers/WeatherAPIManager.h
15 objc Support /Users/ruenzuo/Documents/GitHub/north-american-ironman/WeatherApp/Support/main.m
13 objc Support /Users/ruenzuo/Documents/GitHub/north-american-ironman/WeatherApp/Support/Includes.h
And Sloccount Plugin for Jenkins is unable to parse it.
Any thoughts on that?
I finally ended up using CLOC (http://cloc.sourceforge.net/) instead of SLOCCount (http://www.dwheeler.com/sloccount/).
I couldn't find a Jenkins plugin for CLOC, so I'm using xsltproc to translate the CLOC output into the SLOCCount output format.
I'm using the following script (https://github.com/Ruenzuo/north-american-ironman/blob/master/Scripts/Sloccount.sh), feel free to use it. You will also need this file (https://github.com/Ruenzuo/north-american-ironman/blob/master/Utils/Sloccount-format.xls).
Hope this helps someone.
One step that may be missing: after you have downloaded the archive, you need to run make install. This builds the missing binaries (c_count among them) and installs the sloccount utility and man pages to /usr/local/bin.
I'm very inexperienced with Linux and the terminal, but I'm trying to learn. I've also never included an external library before. Now I need to include the Boost.Asio library in a program being developed in Ubuntu with G++.
Could someone very kindly and very carefully explain how to go about this, from the beginning?
EDIT:
Expanding on the original question: if I need to send this code to someone else for them to run it on a completely separate machine but in the same environment, how do I take that into account? If this whole process involves literally placing library files into the same folder as the code, do I just send those library files along with the .cpp to this other person?
You have mentioned that you are using Ubuntu, so the simplest way to use boost is to first install the libboost-all-dev package (from Synaptic), which will install everything for you, including the parts that need to be compiled. Then you just use g++ in the usual way.
Note that you should check whether the installed version is the one you want; if not, you may want to install boost yourself. On the other hand, boost is mostly a header-only library, so you only need to extract the files (right-click in Ubuntu...) to a folder and point the compiler at it while compiling:
g++ hello_world.cpp -I boost_1_49_0
where the -I option specifies the path where the compiler will look for the boost headers (please use an absolute path). The path must be the directory that contains the boost/ folder, since the headers are included as <boost/...>.
If you want to send your program to others, don't copy only some of the boost files; that does not work because of the dependencies between them. Instead, ask them to install the same environment as you, which is easy (just unzip a file...).
I don't know about your specific IDE, or about Boost.Asio specifically, but in general:
Whenever you need to link to a library, there is a file named something like lib???.a, and you pass the -l??? flag to g++ to link against it.
(I'm not too familiar with the details myself, so there might be other file formats and whatnot too...)
Regarding the edit:
The "right" way would be to just have them download the library themselves and pass -l??? to their linker. Bundling Boost with your source code would make it huge, and for no good reason... it's not like you include the STL in your code, after all.
You don't include the library; instead you declare a dependency on it. E.g., if you use autoconf and automake, you would add AX_BOOST_BASE1 to require boost and AX_BOOST_ASIO to require the Asio library. In your Makefile.am file(s) you use the BOOST_CPPFLAGS and BOOST_LDFLAGS macros and rely on ./configure to set them properly. Then whoever consumes your code runs the well-known ./configure script, which analyzes the environment for the location of boost and sets up the appropriate values in the build environment so that make succeeds.
Well, at least that is the theory. In practice, there is a reason the whole thing is better known as autohell. Alternatives exist, like CMake or boost's own bjam, but the synopsis is always the same: you declare the dependency in your build configuration, and the destination that consumes your product has to satisfy the requirement (meaning it has to download/install the required version of boost, in your case). Otherwise you enter the business of distributing binaries, and that is fraught with problems due to the range of platforms/architectures/distributions your application is expected to be deployed on.
Obviously if you use a different build system, like ANT, then you should refer to that build system documentation on how to declare the requirement for boost.
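For concreteness, here is a sketch of what the autoconf route described above might look like. The version number and the myprog target name are placeholders, and the AX_BOOST_* macros must be available (they ship with the autoconf-archive package):

```shell
# configure.ac (fragment)
AX_BOOST_BASE([1.49], [], [AC_MSG_ERROR([could not find a usable boost])])
AX_BOOST_ASIO

# Makefile.am (fragment)
AM_CPPFLAGS    = $(BOOST_CPPFLAGS)
myprog_LDFLAGS = $(BOOST_LDFLAGS)
myprog_LDADD   = $(BOOST_ASIO_LIB)
```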
1: ax_boost.m4 is not the only boost-detecting m4 library; there are others out there, but it is the one documented in the GNU autoconf list of macros.