java - PyLucene error with IcedTea / JDK / JRE
I have followed the installation instructions at http://bendemott.blogspot.de/2013/11/installing-pylucene-4-451.html for PyLucene, using the latest pylucene-4.9.0.0.
When I tried lucene.initVM(), I got the following error:
alvas@ubi:~$ python
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import lucene
>>> lucene.initVM()
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00007ffba22808b8, pid=5189, tid=140718811092800
#
# JRE version: OpenJDK Runtime Environment (7.0_65-b32) (build 1.7.0_65-b32)
# Java VM: OpenJDK 64-Bit Server VM (24.65-b04 mixed mode linux-amd64 compressed oops)
# Derivative: IcedTea 2.5.3
# Distribution: Ubuntu 14.04 LTS, package 7u71-2.5.3-0ubuntu0.14.04.1
# Problematic frame:
# V  [libjvm.so+0x6088b8]  jni_RegisterNatives+0x58
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /home/alvas/hs_err_pid5189.log
#
# If you would like to submit a bug report, please include
# instructions on how to reproduce the bug and visit:
#   http://icedtea.classpath.org/bugzilla
#
Aborted (core dumped)
The full error report file is at http://pastebin.com/6b8fyc4z.
Is there something wrong with the IcedTea configuration? Or with the JDK or JRE? How should I resolve the problem?
So I took a look at the stack trace, and I don't think the issue is PyLucene. In the stack trace, I see this error:
siginfo: si_signo=SIGSEGV, si_errno=0, si_code=1 (SEGV_MAPERR), si_addr=0x0000000000000000
If you look at the first part, SIGSEGV, it means there is a segmentation fault somewhere in the system. SEGV_MAPERR is the more specific error: it means OpenJDK was trying to map a memory object and failed. That could be caused by not having enough memory, a bad pagefile/virtual memory setup, a bad address space, or a bad library, which is why it might work on one machine and not on another. Core dumps would be useful here, so if you can run
ulimit -c unlimited
that will give us something to look at. Is this in a VM or on a physical machine? I've seen random SIGSEGVs in Ubuntu VMs that don't have enough memory allocated for the various Java tasks. I saw it on ESXi hypervisors specifically, and noticed it when ESXi started to perform memory swapping. I was able to resolve it by increasing memory, rebooting the VM, and making sure the hypervisor wasn't swapping memory. Let me know if this helps. :)
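If it helps, here is a minimal sketch of that suggestion (my own illustration, not from the original answer): it raises the core-file limit from inside Python, the per-process equivalent of running "ulimit -c unlimited" in the shell, before triggering the crashing call.

# Sketch only: enable core dumps for this process, then reproduce the crash.
# Assumes the same pylucene-4.9.0.0 build as in the question; resource is stdlib.
import resource
import lucene

# Per-process equivalent of "ulimit -c unlimited" (the hard limit must permit it).
resource.setrlimit(resource.RLIMIT_CORE,
                   (resource.RLIM_INFINITY, resource.RLIM_INFINITY))

lucene.initVM()  # if this still segfaults, a core file should now be written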
Edit: I also noticed that if the underlying storage provider had poor performance, it would impact the swapping of data, and you would feel the impact as SIGSEGV issues.
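For the memory/swap angle, a quick check is something like the sketch below (my own, assuming a Linux guest as in the crash log): it reads /proc/meminfo to show how much free memory is left and how much swap is in use before you call lucene.initVM().

# Sketch only: report free memory and swap usage from /proc/meminfo.
def meminfo_kb():
    """Parse /proc/meminfo into a dict of field -> size in kB."""
    info = {}
    with open("/proc/meminfo") as fh:
        for line in fh:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])  # values are reported in kB
    return info

m = meminfo_kb()
print("MemFree:  %d MB" % (m["MemFree"] // 1024))
print("SwapUsed: %d MB" % ((m["SwapTotal"] - m["SwapFree"]) // 1024))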