java.lang.OutOfMemoryError: GC overhead limit exceeded when loading an ontology file from a URI


IWilliams
When I load the ontology from a file, like:
onto = get_ontology("file://path/to/file.owl").load()
with onto:
     onto.save(file=settings.ONTOLOGY_DIR + "ASE2020SecConWebOne" + projectName + "_" + projOwner + ".owl", format="rdfxml")

Next, I do:
owlready2.JAVA_EXE = settings.JAVA_DIR
world = World()
world.get_ontology("file://" + settings.ONTOLOGY_DIR + "ASE2020SecConWebOne" + projectName + "_" + projOwner + ".owl").load()
sync_reasoner_pellet(world, infer_property_values = True, infer_data_property_values = True)

Everything works great.

However, when I load the ontology like:
onto = get_ontology("http://ontologyfile.owl").load()
with onto:
     onto.save(file=settings.ONTOLOGY_DIR + "ASE2020SecConWebOne" + projectName + "_" + projOwner + ".owl", format="rdfxml")

Next, I do:
owlready2.JAVA_EXE = settings.JAVA_DIR
world = World()
world.get_ontology("file://" + settings.ONTOLOGY_DIR + "ASE2020SecConWebOne" + projectName + "_" + projOwner + ".owl").load()
sync_reasoner_pellet(world, infer_property_values = True, infer_data_property_values = True)

I get:

Java error message is:
log4j:WARN No appenders could be found for logger (com.hp.hpl.jena.sparql.mgt.ARQMgt).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.util.ArrayList.<init>(Unknown Source)
        at com.clarkparsia.pellet.rules.rete.Tuple.<init>(Tuple.java:47)
        at com.clarkparsia.pellet.rules.rete.Fact.<init>(Fact.java:38)
        at com.clarkparsia.pellet.rules.rete.BetaNode.join(BetaNode.java:129)
        at com.clarkparsia.pellet.rules.rete.Interpreter.processBetaNodes(Interpreter.java:109)
        at com.clarkparsia.pellet.rules.rete.Interpreter.run(Interpreter.java:236)
        at com.clarkparsia.pellet.rules.ContinuousRulesStrategy.applyRete(ContinuousRulesStrategy.java:179)
        at com.clarkparsia.pellet.rules.ContinuousRulesStrategy.complete(ContinuousRulesStrategy.java:322)
        at org.mindswap.pellet.ABox.isConsistent(ABox.java:1423)
        at org.mindswap.pellet.ABox.isConsistent(ABox.java:1260)
        at org.mindswap.pellet.KnowledgeBase.consistency(KnowledgeBase.java:1987)
        at org.mindswap.pellet.KnowledgeBase.isConsistent(KnowledgeBase.java:2061)
        at pellet.PelletRealize.run(PelletRealize.java:70)
        at pellet.Pellet.run(Pellet.java:105)
        at pellet.Pellet.main(Pellet.java:59)


When I check the files that were saved after loading from file versus from URI, the only difference is the order of the SWRL atoms. So, if the original OWL file has:

UseCase(?uc) ^ UseCase(?uc2) ^ hasFlowGroup(?uc, ?g) ^ hasFlowGroup(?uc2, ?g) ^ isPartOfGroup(?f, ?g) ^ isPartOfGroup(?f2, ?g) ^ Search(?s1) ^ hasAction(?f, ?s1) ^ Search(?s2) ^ hasAction(?f2, ?s2) ^ actorPartOf(?a, ?uc) ^ actorPartOf(?a2, ?uc2) ^ sameAs(?a2, ?a) ^ differentFrom(?uc, ?uc2) ^ Aggregation(?ag) -> raise(?a, ?ag)

the file saved after the URL load has:

differentFrom(?uc, ?uc2) ^ actorPartOf(?a2, ?uc2) ^ actorPartOf(?a, ?uc) ^ hasFlowGroup(?uc, ?g) ^ Search(?s2) ^ Aggregation(?ag) ^ UseCase(?uc) ^ isPartOfGroup(?f2, ?g) ^ Search(?s1) ^ hasFlowGroup(?uc2, ?g) ^ hasAction(?f, ?s1) ^ hasAction(?f2, ?s2) ^ isPartOfGroup(?f, ?g) ^ sameAs(?a2, ?a) ^ UseCase(?uc2) -> raise(?a, ?ag)

Is there a way to keep the SWRL rules in the order of the original OWL file? I have several SWRL rules being reordered, and I think that reordering is what causes the OutOfMemoryError.
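One way to compare the rules of the two saved files side by side is a minimal sketch like the following (the file paths are placeholders, and it assumes Owlready2's Ontology.rules() iterator):

from owlready2 import World

# Load each saved file into its own World so the two copies do not collide.
w1, w2 = World(), World()
onto_from_file = w1.get_ontology("file:///path/to/saved_from_file.owl").load()
onto_from_url  = w2.get_ontology("file:///path/to/saved_from_url.owl").load()

# rules() iterates over the SWRL rules defined in an ontology. The rules are
# printed positionally, which is enough for eyeballing ordering differences.
for r1, r2 in zip(onto_from_file.rules(), onto_from_url.rules()):
    print(r1)   # rule as saved after the file:// load
    print(r2)   # rule at the same position after the http:// load
    print()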
Re: java.lang.OutOfMemoryError: GC overhead limit exceeded when loading an ontology file from a URI

Jiba
Administrator
Hi,

Most ontology tools do not preserve atom order, including Protégé.


Owlready writes them in quadstore order, which should depend directly on the reading order -- although SQLite3 does not explicitly guarantee that order.


You may also try to increase Java memory (using the latest version of Owlready) with:

owlready2.reasoning.JAVA_MEMORY = 4000 # Default is 2000
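For example, a minimal sketch of the full reasoning step with the larger heap (the ontology path is a placeholder):

import owlready2
from owlready2 import World, sync_reasoner_pellet

# Must be set before the reasoner is launched; the value is in megabytes.
owlready2.reasoning.JAVA_MEMORY = 4000  # Default is 2000

world = World()
world.get_ontology("file:///path/to/ontology.owl").load()
sync_reasoner_pellet(world, infer_property_values = True, infer_data_property_values = True)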

Jiba