Speed tradeoff of Java's -Xms and -Xmx options (tagged: jvm-arguments)
[Accepted] The -Xmx argument defines the maximum size the heap can reach in the JVM. You must know your program well, see how it performs under load, and set this parameter accordingly. A value that is too low can cause OutOfMemoryErrors or very poor performance if your program's heap usage approaches the maximum heap size. If your program runs on a dedicated server, you can set this parameter higher because it won't affect other programs.
The -Xms argument sets the initial heap size for the JVM. This means that when you start your program, the JVM allocates this amount of memory immediately. This is useful if your program will consume a large amount of heap memory right from the start: it saves the JVM from constantly growing the heap and can gain some performance there. If you don't know whether this parameter is going to help you, don't use it.
In summary, this is a compromise that you have to decide based only on the memory behavior of your program.
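To see how these two settings show up at run time, here is a small sketch using the standard `java.lang.Runtime` API (the class name `HeapInfo` is just an example; the printed figures will of course vary with your own -Xms/-Xmx values, so no expected output is shown):

```java
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        // maxMemory() roughly reflects -Xmx: the ceiling the heap may grow to.
        System.out.println("max   heap: " + (rt.maxMemory() / mb) + " MB");
        // totalMemory() is the currently committed heap; it starts near -Xms
        // and grows toward -Xmx as the program allocates.
        System.out.println("total heap: " + (rt.totalMemory() / mb) + " MB");
        // freeMemory() is the unused part of the committed heap.
        System.out.println("free  heap: " + (rt.freeMemory() / mb) + " MB");
    }
}
```

Running it as, say, `java -Xms64m -Xmx256m HeapInfo` and again with different values makes the effect of the two flags visible.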
It depends on the GC your Java is using. Parallel GCs might work better with larger memory settings; I'm no expert on that, though.
In general, the larger the memory, the less frequently it needs to be GC-ed, because there is lots of room for garbage. However, when a GC does happen, it has to work over more memory, which in turn might be slower.
I have found that in some cases too much memory can slow the program down.
For example, I had a Hibernate-based transform engine that started running slowly as the load increased. It turned out that each time we got an object from the db, Hibernate was checking memory for objects that would never be used again.
The solution was to evict the old objects from the session.
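The actual fix used Hibernate's session eviction; as a plain-Java sketch of the underlying idea (the class and method names below are hypothetical stand-ins, not Hibernate's API), the point is simply to drop references to entities you will never touch again, so the session stops tracking them and the GC can reclaim them:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for a session-level entity cache. In Hibernate
// the equivalent calls are session.evict(entity) or session.clear().
public class SessionCacheSketch {
    private final Map<Long, Object> cache = new HashMap<>();

    // Loading an entity keeps a reference alive in the session cache.
    public void load(long id, Object entity) {
        cache.put(id, entity);
    }

    // Evicting removes the reference, so the entity becomes eligible for
    // collection and is no longer scanned on later session operations.
    public void evict(long id) {
        cache.remove(id);
    }

    public int size() {
        return cache.size();
    }
}
```

The performance point is not the map itself but the references: a long-lived cache full of dead objects both defeats the GC and adds work to every lookup.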
- Allocation always depends on your OS. If you allocate too much memory, you could end up with portions loaded into swap, which indeed is slow.
- Whether your program runs slower or faster depends on the references the VM has to handle and to clean. The GC doesn't have to sweep through all of the allocated memory to find abandoned objects: it knows its objects and the amount of memory they occupy through reference mapping, so sweeping just depends on the size of your objects. If your program behaves the same in both cases, the only performance impact should be at VM startup, when the VM tries to allocate the memory provided by your OS, and if you use swap (which again leads to point 1).
The speed tradeoffs between various settings of -Xms and -Xmx depend on the application and the system you run your Java application on. They also depend on your JVM and on the other garbage collection parameters you use.
This question is 11 years old, and since then the effects of JVM parameters on performance have become even harder to predict in advance. So you can try different values and watch the effects on performance, or use a free tool like Optimizer Studio that will find optimal JVM parameter values automatically.
It is difficult to say how the memory allocation will affect your speed. It depends on the garbage collection algorithm the JVM is using. For example, if your garbage collector needs to pause to do a full collection, then if you have ten times more memory than you really need, the collector will have ten times more garbage to clean up.
If you are using Java 6, you can use jconsole (in the bin directory of the JDK) to attach to your process and watch how the collector is behaving. In general the collectors are very smart and you won't need to do any tuning, but if you have a need, there are numerous options you can use to further tune the collection process.
> C:\java -X
>     -Xmixed           mixed mode execution (default)
>     -Xint             interpreted mode execution only
>     -Xbootclasspath:<directories and zip/jar files separated by ;>
>                       set search path for bootstrap classes and resources
>     -Xbootclasspath/a:<directories and zip/jar files separated by ;>
>                       append to end of bootstrap class path
>     -Xbootclasspath/p:<directories and zip/jar files separated by ;>
>                       prepend in front of bootstrap class path
>     -Xnoclassgc       disable class garbage collection
>     -Xincgc           enable incremental garbage collection
>     -Xloggc:<file>    log GC status to a file with time stamps
>     -Xbatch           disable background compilation
>     -Xms<size>        set initial Java heap size
>     -Xmx<size>        set maximum Java heap size
>     -Xss<size>        set java thread stack size
>     -Xprof            output cpu profiling data
>     -Xfuture          enable strictest checks, anticipating future default
>     -Xrs              reduce use of OS signals by Java/VM (see documentation)
>     -Xcheck:jni       perform additional checks for JNI functions
>     -Xshare:off       do not attempt to use shared class data
>     -Xshare:auto      use shared class data if possible (default)
>     -Xshare:on        require using shared class data, otherwise fail.
The -X options are non-standard and subject to change without notice.
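Putting a few of the options above together, a typical launch line might look like the following (app.jar and the sizes are placeholders; pick values that fit your own program):

```shell
# Fix the heap at 512 MB (initial == max, so no resizing) and log GC activity.
java -Xms512m -Xmx512m -Xloggc:gc.log -jar app.jar
```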
This was always the question I had when I was working on one of my applications, which created a massive number of threads per request.
So this is a really good question, and there are two aspects to it:
1. Whether my Xms and Xmx values should be the same
- Most websites and even the Oracle docs suggest they be the same. However, I suggest keeping a 10-20% buffer between those values, to give your application the option of heap resizing in case of sudden high traffic spikes or an incidental memory leak.
2. Whether I should start my application with a lower heap size
- So here's the thing: no matter what GC algorithm you use (even G1), a large heap always has some tradeoff. The goal is to understand your application's behavior and determine what heap size keeps GC pauses within the latency and throughput you can allow.
- For example, if your application has a lot of threads (each thread has a 1 MB stack in native memory, not in the heap) but does not occupy heavy object space, then I suggest a lower value of Xms.
- If your application creates a lot of objects with an increasing number of threads, then identify what value of Xms you can set to tolerate those STW pauses. That means identifying the maximum response time you can tolerate for your incoming requests, and tuning the minimum heap size accordingly.
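Concretely, point 1's 10-20% buffer with a 4 GB ceiling would look something like this (the sizes and app.jar are illustrative only):

```shell
# -Xms at roughly 80% of -Xmx leaves the JVM room to resize under a spike.
java -Xms3276m -Xmx4096m -jar app.jar
```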