mantidimaging.core.parallel.utility module

class mantidimaging.core.parallel.utility.SharedArray(array: np.ndarray, shared_memory: Optional[SharedMemory], free_mem_on_del: bool = True)[source]

Bases: object

property array_proxy: mantidimaging.core.parallel.utility.SharedArrayProxy
property has_shared_memory: bool
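The pattern behind this class can be illustrated with a minimal sketch. This is not the library's implementation: `MiniSharedArray` and `free` are illustrative names, assuming only that the real class pairs a `numpy` ndarray view with the `multiprocessing.shared_memory.SharedMemory` block backing it (which may be `None` for a plain in-process array) and releases the block when asked to.

```python
import numpy as np
from multiprocessing import shared_memory

class MiniSharedArray:
    """Illustrative sketch of a SharedArray-like wrapper, not the real class."""

    def __init__(self, array, shm, free_mem_on_del=True):
        self.array = array      # ndarray view over the shared buffer
        self._shm = shm         # None when the array is not shared-memory backed
        self._free = free_mem_on_del

    @property
    def has_shared_memory(self):
        return self._shm is not None

    def free(self):
        # Drop the ndarray view first so the buffer has no exported references,
        # then close this handle and (if owning) unlink the OS-level block.
        self.array = None
        if self._shm is not None:
            self._shm.close()
            if self._free:
                self._shm.unlink()

# Allocate a 2x3 float32 block and wrap it.
shm = shared_memory.SharedMemory(create=True, size=4 * 6)
arr = np.ndarray((2, 3), dtype=np.float32, buffer=shm.buf)
arr[:] = 1.0
sa = MiniSharedArray(arr, shm)
has = sa.has_shared_memory      # True
total = float(sa.array.sum())   # 6.0
del arr
sa.free()
```

Releasing the ndarray view before closing the block matters: `SharedMemory.close()` raises `BufferError` while any ndarray still references the buffer.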
class mantidimaging.core.parallel.utility.SharedArrayProxy(mem_name: Optional[str], shape: Tuple[int, ...], dtype: npt.DTypeLike)[source]

Bases: object

property array: numpy.ndarray
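The point of a proxy like this is that it holds only picklable metadata (memory name, shape, dtype), so it can be sent to worker processes, which then reattach to the same block by name. A hedged sketch of that idea, with illustrative names (`MiniArrayProxy` is not the library class):

```python
import numpy as np
from multiprocessing import shared_memory

class MiniArrayProxy:
    """Sketch of the proxy idea: store metadata, reattach to the block lazily."""

    def __init__(self, mem_name, shape, dtype):
        self._name = mem_name
        self._shape = shape
        self._dtype = dtype
        self._shm = None            # attached on first access to .array

    @property
    def array(self):
        if self._name is None:
            raise ValueError("no shared memory block to attach to")
        if self._shm is None:
            # Attach to the existing block by name -- no data is copied.
            self._shm = shared_memory.SharedMemory(name=self._name)
        return np.ndarray(self._shape, dtype=self._dtype, buffer=self._shm.buf)

    def close(self):
        if self._shm is not None:
            self._shm.close()
            self._shm = None

# Owner process creates and fills a block...
owner = shared_memory.SharedMemory(create=True, size=8)
np.ndarray((2,), dtype=np.float32, buffer=owner.buf)[:] = [3.0, 4.0]

# ...and a worker reattaches through the proxy.
proxy = MiniArrayProxy(owner.name, (2,), np.float32)
vals = proxy.array.tolist()     # [3.0, 4.0]

proxy.close()
owner.close()
owner.unlink()
```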
mantidimaging.core.parallel.utility.calculate_chunksize(cores)[source]

TODO: a proper calculation of chunksize may be possible, although the best performance has been with a chunksize of 1. From the performance tests, the chunksize does not seem to make much of a difference, but larger chunks usually led to slower performance:

Shape: (50, 512, 512)

  1 chunk   3.06s
  2 chunks  3.05s
  3 chunks  3.07s
  4 chunks  3.06s
  5 chunks  3.16s
  6 chunks  3.06s
  7 chunks  3.058s
  8 chunks  3.25s
  9 chunks  3.45s
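Given the measurements above, returning a fixed chunksize of 1 is a defensible choice. The sketch below shows where such a value is consumed; it is not the library's code, and a `ThreadPool` stands in for whatever pool the real module uses:

```python
from multiprocessing.pool import ThreadPool

def calculate_chunksize_sketch(cores: int) -> int:
    # Sketch only: per the measurements above, 1 performed best
    # regardless of core count, so no real calculation is done.
    return 1

with ThreadPool(4) as pool:
    # chunksize controls how many tasks each worker pulls per request.
    squares = list(pool.imap(lambda x: x * x, range(8),
                             chunksize=calculate_chunksize_sketch(4)))
print(squares)  # [0, 1, 4, 9, 16, 25, 36, 49]
```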

mantidimaging.core.parallel.utility.copy_into_shared_memory(array: numpy.ndarray) mantidimaging.core.parallel.utility.SharedArray[source]
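A hedged sketch of what copying into shared memory involves: allocate a block the size of the source array, wrap it in an ndarray, and copy the data across once. The function name here is illustrative, and the return convention (a raw `(ndarray, SharedMemory)` pair rather than a `SharedArray`) is a simplification:

```python
import numpy as np
from multiprocessing import shared_memory

def copy_into_shared_sketch(array):
    # Allocate a block exactly as large as the source array's data.
    shm = shared_memory.SharedMemory(create=True, size=array.nbytes)
    shared = np.ndarray(array.shape, dtype=array.dtype, buffer=shm.buf)
    shared[:] = array[:]    # one-off copy; source and copy are independent after this
    return shared, shm

src = np.arange(4, dtype=np.float32)
shared, shm = copy_into_shared_sketch(src)
src[0] = 99.0               # mutating the source does not affect the shared copy
first = float(shared[0])    # 0.0

del shared
shm.close()
shm.unlink()
```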
mantidimaging.core.parallel.utility.create_array(shape: Tuple[int, ...], dtype: npt.DTypeLike = <class 'numpy.float32'>) SharedArray[source]

Create an array in shared memory

Parameters
  • shape – Shape of the array

  • dtype – Dtype of the array

Returns

The created SharedArray
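The contract above can be sketched as follows: size the block from the shape and dtype (float32 by default, per the signature) and expose an ndarray view over it. This is an illustration of the pattern, not the library's implementation, and it returns a raw pair instead of a `SharedArray`:

```python
import math
import numpy as np
from multiprocessing import shared_memory

def create_shared_sketch(shape, dtype=np.float32):
    # Required bytes = number of elements x element size.
    nbytes = math.prod(shape) * np.dtype(dtype).itemsize
    shm = shared_memory.SharedMemory(create=True, size=nbytes)
    return np.ndarray(shape, dtype=dtype, buffer=shm.buf), shm

arr, shm = create_shared_sketch((2, 3))
arr[:] = 0.0
shape_out, dtype_out = arr.shape, arr.dtype

del arr
shm.close()
shm.unlink()
```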

mantidimaging.core.parallel.utility.enough_memory(shape, dtype)[source]
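A check like this presumably compares the bytes the array would need against available system memory. The sketch below is an assumption, not the library's logic: it probes available memory via `os.sysconf` (Linux-specific names) and optimistically returns True where that is unsupported.

```python
import math
import os
import numpy as np

def enough_memory_sketch(shape, dtype):
    # Bytes the proposed array would occupy.
    required = math.prod(shape) * np.dtype(dtype).itemsize
    try:
        # Linux-only sysconf names; a real implementation might use psutil.
        available = os.sysconf("SC_AVPHYS_PAGES") * os.sysconf("SC_PAGE_SIZE")
    except (ValueError, OSError, AttributeError):
        return True  # cannot measure on this platform; proceed optimistically
    return required < available

ok = enough_memory_sketch((50, 512, 512), np.float32)  # ~50 MB request
```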
mantidimaging.core.parallel.utility.execute_impl(img_num: int, partial_func: partial, is_shared_data: bool, progress: Progress, msg: str)[source]
mantidimaging.core.parallel.utility.multiprocessing_necessary(shape: int, is_shared_data: bool) bool[source]
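A decision function with this signature plausibly weighs the work size against process overhead. The threshold and rules below are assumptions for illustration only, not the library's actual heuristic:

```python
def multiprocessing_necessary_sketch(num_images: int, is_shared_data: bool) -> bool:
    # Assumed heuristic: too little work cannot amortise process start-up...
    if num_images < 10:
        return False
    # ...and non-shared data would have to be pickled to the workers,
    # which may cost more than the parallelism saves.
    if not is_shared_data:
        return False
    return True

print(multiprocessing_necessary_sketch(50, True))   # True
print(multiprocessing_necessary_sketch(5, True))    # False
```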
mantidimaging.core.parallel.utility.run_compute_func_impl(worker_func: Callable[[int], None], num_operations: int, is_shared_data: bool, progress=None, msg: str = '')[source]
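The overall pattern of such a runner can be sketched as: call `worker_func(i)` once per operation, through a pool when the data is shared, serially otherwise, ticking progress as operations complete. A `ThreadPool` stands in for the process pool the real module presumably uses, and the progress callback is simplified to a plain callable:

```python
from multiprocessing.pool import ThreadPool

def run_compute_sketch(worker_func, num_operations, is_shared_data, progress=None):
    """Sketch of the runner pattern; not the library's implementation."""
    if is_shared_data:
        with ThreadPool() as pool:
            # chunksize=1 per the calculate_chunksize notes above.
            for _ in pool.imap(worker_func, range(num_operations), chunksize=1):
                if progress is not None:
                    progress()      # one tick per completed operation
    else:
        for i in range(num_operations):
            worker_func(i)
            if progress is not None:
                progress()

results = [0] * 4
run_compute_sketch(lambda i: results.__setitem__(i, i * i), 4, True)
print(results)  # [0, 1, 4, 9]
```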