Trying to factor longs in Java, out of memory?

public static void main(String[] args) {
    System.out.println("The largest prime factor of 600851475143 is " + largest(primeFactors(600851475143.0)));
}

public static ArrayList<Integer> factor(double n) {
    ArrayList<Integer> factors = new ArrayList<Integer>();
    for (long i = 1; i <= n; i++) {
        if (isInt(n / i)) {
            factors.add((int) i);
        }
    }
    return factors;
}

public static ArrayList<Integer> primeFactors(double n) {
    ArrayList<Integer> factors = factor(n);
    ArrayList<Integer> primes = new ArrayList<Integer>();
    for (int f : factors) {
        if (isPrime(f)) {
            primes.add(f);
        }
    }
    return primes;
}

public static boolean isInt(double d) {
    return (d == (int) d);
}

public static boolean isPrime(double d) {
    return (factor(d).size() <= 2);
}

public static long largest(ArrayList<Integer> integers) {
    int max = 1;
    for (int i = 0; i < integers.size(); i++) {
        if (integers.get(i) > max) {
            max = integers.get(i);
        }
    }
    return max;
}
Running this code causes Java to complain that it is out of memory. I wasn't expecting this to be a difficult problem to solve, as my code makes sense and works for smaller numbers, but it seems to have trouble with this one. Is there a problem in my code causing it to crash, or is it simply that my code (or Java in general) is not memory efficient? The largest number it could complete, found by subtracting amounts from this larger one, was "60085147.0", which it answered correctly. How can I get it to handle a larger number?

You have severe problems with integer overflow. isInt() will never return true if d ≥ 2^31, because casting such a double to int clamps it to Integer.MAX_VALUE. On the other hand, isPrime() will return true if d is twice a prime p ≥ 2^31: the quotients for i = 1 and i = 2 are too large to survive the (int) cast inside isInt(), so factor() only records p and 2p, and a list of two entries passes the size() <= 2 test.
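A quick snippet of my own (not taken from your code) that shows the clamping behind that first point; Java's narrowing conversion turns any too-large double into Integer.MAX_VALUE, so the equality in isInt() can never hold:

public class CastDemo {
    public static void main(String[] args) {
        double n = 600851475143.0;
        // Narrowing a too-large double to int clamps it to Integer.MAX_VALUE
        System.out.println((int) n);       // prints 2147483647
        System.out.println(n == (int) n);  // prints false, so isInt(n / 1) is false
    }
}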
Out of curiosity, I'd love to know how long this code ran. It should take less than a millisecond, but I suspect your version took a bit longer. (Maybe I got your question wrong: maybe you are not running out of memory, but out of time.)
If you try other Project Euler problems: Do not try to solve the problems literally. You definitely don't need a list of all factors to find the largest prime factor of a number.
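If it helps, here is a minimal sketch (my own, not a repaired version of your program) that stays entirely in long arithmetic and never builds a list of factors:

public static long largestPrimeFactor(long n) {
    long largest = 1;
    for (long p = 2; p * p <= n; p++) {
        while (n % p == 0) { // divide out every factor p as soon as it is found
            largest = p;
            n /= p;
        }
    }
    // whatever remains after trial division is either 1 or a prime factor
    return n > 1 ? n : largest;
}

Calling largestPrimeFactor(600851475143L) returns the answer almost instantly, since the loop never runs past the square root of the remaining value.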

Related

Performance of sequential stream

I read the "when to use parallel stream?" by DougLea et.al http://gee.cs.oswego.edu/dl/html/StreamParallelGuidance.html.
I wonder did any one had a guide lines(do's/ don't dos)/ observations which felt them that old way of coding is better in some cases than sequential stream?
I found one here https://jaxenter.com/java-performance-tutorial-how-fast-are-the-java-8-streams-118830.html
I know it's a abstract question but it will be helpful if somebody can share their experience in performance of seq stream vs java 7 way
I did this just a few days ago: we had to sum a very large array and I was wondering what the fastest way to do it would be, so I measured instead of guessing (using JMH):
@State(Scope.Thread)
public static class Holder {

    @Param({ "1000", "10000", "50000", "100000", "1000000" })
    public int howManyEntries;

    int[] array = null;

    @Setup
    public void setUp() {
        array = new int[howManyEntries];
        for (int i = 0; i < howManyEntries; ++i) {
            array[i] = i;
        }
    }

    @TearDown
    public void tearDown() {
        array = null;
    }
}

@Fork(1)
@Benchmark
public int iterative(Holder holder) {
    int total = 0;
    for (int i = 0; i < holder.howManyEntries; ++i) {
        total += holder.array[i];
    }
    return total;
}

@Fork(1)
@Benchmark
public int stream(Holder holder) {
    return Arrays.stream(holder.array).sum();
}

@Fork(1)
@Benchmark
public int streamParallel(Holder holder) {
    return Arrays.stream(holder.array).parallel().sum();
}
The winner is always the old-style Java 7 way:
// 1000=[iterative, stream, streamParallel]
// 10000=[iterative, stream, streamParallel]
// 50000=[iterative, stream, streamParallel]
// 100000=[iterative, stream, streamParallel]
// 1000000=[iterative, stream, streamParallel]
Even for 1 million elements. But the difference is at most around 60 ms; whether that bites you or not is entirely your choice.
Streams are not meant for speed; they will not replace the old style, nor do they intend to. What they can add, for example, is extra clarity and visibility to your code.
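As a small illustration of that point (my own example, separate from the benchmark above), here is the same filter-and-sum logic written both ways:

import java.util.Arrays;
import java.util.List;

public class ReadabilityExample {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Ada", "Alan", "Grace", "Linus");

        // Java 7 style: explicit loop, condition and accumulator
        int total = 0;
        for (String name : names) {
            if (name.startsWith("A")) {
                total += name.length();
            }
        }

        // Java 8 style: the intent is spelled out by the pipeline itself
        int totalStream = names.stream()
                .filter(n -> n.startsWith("A"))
                .mapToInt(String::length)
                .sum();

        System.out.println(total + " " + totalStream); // both print 7
    }
}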

Is net.sf.saxon.s9api.XsltTransformer designed for one time use?

I don't understand the XsltTransformer class well enough to explain why method f1 is superior to f2. In fact, f1 finishes in about 40 seconds, consuming between 750 MB and 1 GB of memory. I was expecting f2 to be the better solution, but it never finishes for the same lengthy list of input files. By the time I kill it, it has processed only about 1000 input files while consuming over 4 GB of memory.
import java.io.*;
import javax.xml.transform.stream.StreamSource;
import net.sf.saxon.s9api.*;

public class foreachfile {

    private static long f1(Processor p, XsltExecutable e, Serializer ser, String args[]) {
        long maxTotalMemory = 0;
        Runtime rt = Runtime.getRuntime();
        for (int i = 1; i < args.length; i++) {
            String xmlfile = args[i];
            try {
                XsltTransformer t = e.load();
                t.setDestination(ser);
                t.setInitialContextNode(p.newDocumentBuilder().build(new StreamSource(new File(xmlfile))));
                t.transform();
                long tm = rt.totalMemory();
                if (tm > maxTotalMemory)
                    maxTotalMemory = tm;
            } catch (Throwable ex) {
                System.err.println(ex);
            }
        }
        return maxTotalMemory;
    }

    private static long f2(Processor p, XsltExecutable e, Serializer ser, String args[]) {
        long maxTotalMemory = 0;
        Runtime rt = Runtime.getRuntime();
        XsltTransformer t = e.load();
        t.setDestination(ser);
        for (int i = 1; i < args.length; i++) {
            String xmlfile = args[i];
            try {
                t.setInitialContextNode(p.newDocumentBuilder().build(new StreamSource(new File(xmlfile))));
                t.transform();
                long tm = rt.totalMemory();
                if (tm > maxTotalMemory)
                    maxTotalMemory = tm;
            } catch (Throwable ex) {
                System.err.println(ex);
            }
        }
        return maxTotalMemory;
    }

    public static void main(String args[]) throws SaxonApiException, Exception {
        String usecase = System.getProperty("xslt.usecase");
        int uc = Integer.parseInt(usecase);
        String xslfile = args[0];
        Processor p = new Processor(true);
        XsltCompiler c = p.newXsltCompiler();
        XsltExecutable e = c.compile(new StreamSource(new File(xslfile)));
        Serializer ser = new Serializer();
        ser.setOutputStream(System.out);
        long maxTotalMemory = uc == 1 ? f1(p, e, ser, args) : f2(p, e, ser, args);
        System.err.println(String.format("Max total memory was %d", maxTotalMemory));
    }
}
I normally recommend using a new XsltTransformer for each transformation. However, the class is serially reusable (you can perform multiple transformations one after another, but not concurrently). The XsltTransformer keeps certain resources in memory, in case they are needed again: notably, all documents read using the doc() or document() functions. This can be useful, for example, if you want to transform one set of input documents to five different output formats as part of your publishing workflow. But if this reuse of resources doesn't give you any benefits, it merely imposes a cost in memory use, which you can avoid by creating a new transformer each time. The same applies if you use the JAXP interface.
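For the JAXP case, the equivalent pattern could look roughly like this (a sketch using the standard javax.xml.transform API rather than the s9api classes above): compile the stylesheet once into a Templates object, which is reusable and thread-safe, and create a fresh Transformer per input document so that no per-document state accumulates.

import java.io.File;
import javax.xml.transform.Templates;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class JaxpPerFileTransform {
    public static void main(String[] args) throws Exception {
        TransformerFactory factory = TransformerFactory.newInstance();
        // Compile once; Templates plays the same role as XsltExecutable
        Templates templates = factory.newTemplates(new StreamSource(new File(args[0])));

        for (int i = 1; i < args.length; i++) {
            // New Transformer per document, analogous to calling e.load() each time
            Transformer t = templates.newTransformer();
            t.transform(new StreamSource(new File(args[i])),
                        new StreamResult(System.out));
        }
    }
}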

How to read 8 bytes of data from a DataInputStream and interpret it as a double in Vala

I'm looking for the equivalent of java.io.DataInputStream.readDouble() for Vala.
Is it even possible?
Currently I have:

public double data;

public override void load (DataInputStream dis) throws IOError {
    data = dis.read_int64 ();
}

But that just converts an int64 to a double, which is not what I want.
I've tried all sorts of casting and dereferencing, but nothing seems to work.
This worked for me:
int main()
{
    int64 foo = 0; // Whatever value you have
    double data = *(double*)(&foo); // This is where the "magic" happens
    stdout.printf("%f", data);
    return 0;
}
Mind you, you may have to set the correct byte order for the conversion to succeed.
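For comparison, this is all DataInputStream.readDouble() does on the Java side (a plain Java snippet, shown only to illustrate the technique): it reads 8 big-endian bytes as a long and then reinterprets the bits, which is exactly what the pointer cast above does in Vala. That is also why the byte order matters.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;

public class ReadDoubleDemo {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        new DataOutputStream(bytes).writeDouble(3.14);

        DataInputStream dis = new DataInputStream(new ByteArrayInputStream(bytes.toByteArray()));
        long bits = dis.readLong();                   // 8 raw big-endian bytes
        double value = Double.longBitsToDouble(bits); // reinterpret the bits, don't convert
        System.out.println(value);                    // prints 3.14
    }
}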

How does one obtain data that has been returned from a method? (Java)

This might be a really stupid question but what happens to data that is returned from a method? For example, if I have a method that adds two numbers and I tell it to return the sum, how would I access that information from the place where the method was called?
Assuming your question is about Java: you can assign the method call's return value to a new variable.
public class Test {

    public static void main(String args[]) {
        int value1 = 2;
        int value2 = 5;
        int sum = sum(value1, value2);
        System.out.println("The sum is :" + sum);
    }

    public static int sum(int value1, int value2) {
        return value1 + value2;
    }
}
What is actually happening is that the method call sum(value1, value2) evaluates to the result of adding the two numbers, and that value is stored in the variable sum. There is also another way of writing the code inside the method, but the result will be the same.
For example:
public class Test {

    public static void main(String args[]) {
        int sum = sum(2, 5);
        System.out.println("The sum is :" + sum);
    }

    public static int sum(int value1, int value2) {
        int sum = value1 + value2;
        return sum;
    }
}
P.S. You could try to use the above samples directly. They will compile and run.
In most languages, you access the result of a function by putting the function call on the right hand side of an assignment expression.
For example, in Python, you can assign the result of calling the built-in len function on a list to a variable called x by doing the following:
x = len([1, 2, 3])

CUDA cudaMemcpy: invalid argument

Here is my code:
struct S {
    int a, b;
    float c, d;
};

class A {
private:
    S* d;
    S h[3];
public:
    A() {
        cutilSafeCall(cudaMalloc((void**)&d, sizeof(S) * 3));
    }
    void Init();
};

void A::Init() {
    for (int i = 0; i < 3; i++) {
        h[i].a = 0;
        h[i].b = 1;
        h[i].c = 2;
        h[i].d = 3;
    }
    cutilSafeCall(cudaMemcpy(d, h, 3 * sizeof(S), cudaMemcpyHostToDevice));
}

A a;
In fact it is part of a complex program that contains CUDA and OpenGL. When I debug this program, it fails at the cudaMemcpy call with the error
cudaSafeCall() Runtime API error 11: invalid argument.
Actually, this program was adapted from another one that runs correctly. In that one, however, the two variables S* d and S h[3] lived in the main function instead of in the class. What is even weirder is that when I use this class A in a small test program, it works fine.
I've also updated my driver, but the error is still there.
Could anyone give me a hint on why this happens and how to solve it? Thanks.
Because the memory operations in CUDA are blocking, they create a synchronization point. So other errors, if not checked with cudaThreadSynchronize, will show up as errors on the memory calls.
So if an error is reported on a memory operation, place a cudaThreadSynchronize before it and check the result.
Be sure that the first cudaMalloc statement is actually being executed. If it is a problem with CUDA initialization, as @harrism indicated, then it would fail at that statement. Try placing printf statements and verify that proper initialization is performed. I think invalid-argument errors are generally caused by using uninitialized memory areas.
Add a printf to your constructor showing the address of the cudaMalloc'ed memory area:
A()
{
    d = NULL;
    cutilSafeCall(cudaMalloc((void**)&d, sizeof(S) * 3));
    printf("D: %p\n", d);
}
Try doing the memory copy into an area that is allocated locally, i.e. move the cudaMalloc right above the cudaMemcpy (just for testing):
void A::Init()
{
    for (int i = 0; i < 3; i++) {
        h[i].a = 0;
        h[i].b = 1;
        h[i].c = 2;
        h[i].d = 3;
    }
    cutilSafeCall(cudaMalloc((void**)&d, sizeof(S) * 3)); // here!..
    cutilSafeCall(cudaMemcpy(d, h, 3 * sizeof(S), cudaMemcpyHostToDevice));
}
Good luck.
