What is written below is just my personal view. But if you take a look at the issue in JIRA, there are two attachments, logging1.ktr and logging1.kjb, meant to validate memory consumption on a running PDI server under different log settings. I wrote them just to help find the root cause, if this really is an issue. There is also a testing_data.zip with the results of running logging1.kjb repeatedly on one machine until it fell into an OutOfMemoryError; it lists results for different log settings and different memory settings. Take a look at commons_results.xlsx inside that archive.

I did not verify this against the current patched 5.2, but I remember that one of the changes between 4.4 and 5+ was a refactoring of logging. There were some issues with that refactoring, and some of them might not have been completely fixed. If the LoggingRegistry is still growing across multiple job/transformation runs without restarting the Java process, that could be the cause of the OutOfMemory errors. It may well already be fixed and no longer reproducible.

As for the error message itself, the first Google hit for that phrase is pretty informative. Essentially, your Java server process has a certain amount of memory it can work with; every time you create an object by calling new, some of that memory is used. When Java starts running low on memory, a background task called the garbage collector identifies which of those objects are no longer needed and frees up that memory.
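To make the suspected failure mode concrete, here is a minimal sketch (not the actual PDI LoggingRegistry API; the class and method names below are hypothetical) of how a static registry that keeps strong references to every log channel ever created will grow across runs until the garbage collector can no longer reclaim anything:

```java
import java.util.HashMap;
import java.util.Map;

public class RegistryLeakDemo {
    // Hypothetical stand-in for a logging registry: a static map that
    // holds a strong reference to every log buffer ever registered.
    static final Map<String, StringBuilder> REGISTRY = new HashMap<>();

    static void runJob(String jobId) {
        // Each run registers a buffer that is never removed, so the
        // garbage collector can never reclaim it.
        REGISTRY.put(jobId, new StringBuilder("log for " + jobId));
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10_000; i++) {
            runJob("job-" + i);
        }
        // Entries survive every run; with real log buffers this is how
        // the heap fills up until an OutOfMemoryError.
        System.out.println("registered channels: " + REGISTRY.size());
    }
}
```

If the registry offered (and callers used) an explicit purge per finished job/transformation, the entries would become unreachable and collectible, which is exactly the behavior the attached logging1.kjb runs were meant to probe.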
When launching a job in PDI 5.2 CE, I am getting "GC overhead limit exceeded". I am using PDI CE 5.2 with -Xmx set to 8 GB, and it fails under both Java 1.7 and Java 1.8. The place where the error is thrown is random; most surprising is that even a simple job step (Delete a file) throws this error while deleting a log file. I have tried several GC algorithms with no success. The usual advice does not apply here: there is no magic number for the memory allocation, since it depends on how big your project is and how much memory your system has, and you can start by doubling the defaults, such as -Xms512m -Xmx2048m (the changes take effect after Eclipse, or whichever host JVM, is restarted) - but I am already well beyond that at 8 GB.

Two sample stack traces:

11:14:37 - main_job - ERROR (version 5.2.0.0, build 1 from _19-48-28 by buildguy) : : GC overhead limit exceeded
11:14:37 - main_job -     at (Locale.java:1962)
11:14:37 - main_job -     at .clone(ValueMetaBase.java:276)
11:14:37 - main_job -     at .clone(ValueMetaBase.java:80)
11:14:37 - main_job -     at .cloneValueMeta(ValueMetaFactory.java:73)
11:14:37 - main_job -     at .cloneValueMeta(ValueMetaFactory.java:65)
11:14:37 - main_job -     at .(RowMeta.java:67)

10:16:10 - main_job - ERROR (version 5.2.0.0, build 1 from _19-48-28 by buildguy) : : Java heap space
10:16:10 - main_job -     at .clone(ValueMetaBase.java:272)
10:16:10 - main_job -     at .clone(ValueMetaBase.java:80)
10:16:10 - main_job -     at .cloneValueMeta(ValueMetaFactory.java:73)
10:16:10 - main_job -     at .cloneValueMeta(ValueMetaFactory.java:65)
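One quick sanity check when tuning -Xms/-Xmx is to confirm the limits the JVM actually started with. A small standalone program (the class name here is mine, and it uses only java.lang.Runtime) reports the heap ceiling, which should roughly match the -Xmx value you configured:

```java
public class HeapInfo {
    public static void main(String[] args) {
        // maxMemory() is the heap ceiling the JVM was started with
        // (approximately the -Xmx value); totalMemory() is what is
        // currently committed. Comparing them against your settings
        // confirms the launch-script edits actually took effect.
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        long totalMb = Runtime.getRuntime().totalMemory() / (1024 * 1024);
        System.out.println("max heap MB: " + maxMb);
        System.out.println("current heap MB: " + totalMb);
    }
}
```

If the reported maximum is far below what you set, the options are being overridden or set in the wrong launch script, which would explain an OOM despite a nominal 8 GB limit.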