Re: Problem using ontology in multi-process environment

Posted by fabad on
URL: http://owlready.306.s1.nabble.com/Problem-using-ontology-in-multi-process-environment-tp3198p3224.html

Hi again, and sorry for the delay. I've been testing the code, and the memory is not being shared. In the following code, one process creates a new class and a second process tries to access it several seconds later. The second process cannot see the class created by the first one:

import ctypes
import pathlib
import time
import types
from concurrent.futures import ProcessPoolExecutor

from owlready2 import get_ontology, Thing


def get_iri(ontology_address, job_id):
    # Dereference the parent's ontology object from its raw id(); this can
    # only resolve when the worker was started via fork(), which copies the
    # parent's address space.
    ontology = ctypes.cast(ontology_address, ctypes.py_object).value
    base_iri = ontology.base_iri
    new_class_name = "NewClass"
    new_class_iri = base_iri + new_class_name
    if job_id == 0:
        print(f"Process {job_id} creating a new class.")
        with ontology:
            NewClass = types.new_class(new_class_name, (Thing,))
        if NewClass in list(ontology.search(iri=new_class_iri)):
            print(f"Process {job_id} has created {new_class_iri}")

    if job_id == 1:
        seconds_to_wait = 10
        print(f"Process {job_id} waiting {seconds_to_wait} seconds")
        time.sleep(seconds_to_wait)
        if len(list(ontology.search(iri=new_class_iri))) > 0:
            print(f"Process {job_id} can see {new_class_iri}")
        else:
            print(f"Process {job_id} cannot see {new_class_iri}")
    return f"job {job_id} -> {ontology.name}"

if __name__ == '__main__':
    ontology_file_path = pathlib.Path("/home/fabad/test_embed_comp/go.owl")
    print("Loading ontology")
    ontology = get_ontology(f"file://{str(ontology_file_path)}").load()
    ontology_address = id(ontology)  # raw memory address, passed to the workers
    print("Ontology loaded")
    executor = ProcessPoolExecutor(max_workers=4)
    results = {}
    for i in range(2):
        results[i] = executor.submit(get_iri, ontology_address, i)

    executor.shutdown(wait=True)

    for n, future in results.items():
        print(future.result())

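The same effect can be reproduced without owlready2 at all. Below is a minimal, stdlib-only sketch (the names are mine, and it assumes a Unix "fork" start method, which is what ProcessPoolExecutor uses by default on Linux): the id() cast resolves inside each child because fork() gives it a copy-on-write copy of the parent's address space, but a mutation made in one child is invisible both to its sibling and to the parent.

```python
import ctypes
import multiprocessing as mp


def worker(address, job_id, queue):
    # Dereference the parent's object via its raw memory address.
    # This only resolves because fork() copies the parent's address
    # space; with the "spawn" start method it would crash.
    shared_list = ctypes.cast(address, ctypes.py_object).value
    if job_id == 0:
        shared_list.append("new item")  # mutates this child's private copy
    queue.put((job_id, len(shared_list)))


if __name__ == "__main__":
    ctx = mp.get_context("fork")  # fork is required for the id() trick
    data = ["a", "b"]
    queue = ctx.Queue()
    p0 = ctx.Process(target=worker, args=(id(data), 0, queue))
    p0.start(); p0.join()
    p1 = ctx.Process(target=worker, args=(id(data), 1, queue))
    p1.start(); p1.join()
    results = dict(queue.get() for _ in range(2))
    print(results)    # {0: 3, 1: 2}: process 1 never sees process 0's append
    print(len(data))  # 2: the parent's copy is untouched as well
```

If that is what is happening here too, it would also be consistent with the low memory usage: copy-on-write pages are shared between parent and children until a child actually writes to them.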

What I still cannot explain, then, is why memory usage stays low when I use this approach.