Pipeline

class sofia_redux.scan.pipeline.pipeline.Pipeline(reduction)[source]

Bases: ABC

Initialize a reduction pipeline.

The reduction pipeline performs the reduction tasks at each iteration. This generally involves running the configured tasks on all integrations in all scans, then updating the iteration source model.

Parameters:
reduction : sofia_redux.scan.reduction.reduction.Reduction
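
The iteration pattern described above (run the ordered tasks on every scan, then fold each scan into the source model) can be sketched with a minimal stand-in class. This is a schematic illustration only, not the `sofia_redux` implementation; the class, scan dictionaries, and task names below are all hypothetical.

```python
# Schematic sketch of the Pipeline iteration pattern: perform the
# configured tasks on all scans, then update the source model once
# per scan. Not the sofia_redux implementation.

class MiniPipeline:
    def __init__(self):
        self.scans = []
        self.ordering = []
        self.source_updates = 0

    def add_scan(self, scan):
        # Register a scan for reduction.
        self.scans.append(scan)

    def set_ordering(self, ordering):
        # Tasks run in exactly this order on each iteration.
        self.ordering = list(ordering)

    def iterate(self):
        # One iteration: all tasks on all scans...
        for scan in self.scans:
            for task in self.ordering:
                scan.setdefault('done', []).append(task)
        # ...then update the source model with each scan.
        for scan in self.scans:
            self.update_source(scan)

    def update_source(self, scan):
        self.source_updates += 1


pipeline = MiniPipeline()
pipeline.add_scan({'name': 'scan1'})
pipeline.add_scan({'name': 'scan2'})
pipeline.set_ordering(['offsets', 'weighting', 'source'])
pipeline.iterate()
```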

Attributes Summary

available_jobs

Return the maximum number of jobs that may be performed in parallel.

configuration

Return the reduction configuration.

parallel_scans

Return the maximum number of parallel scan operations.

parallel_tasks

Return the maximum number of parallel task (in-scan) operations.

pipeline_id

Return a unique identifier for the pipeline.

Methods Summary

add_scan(scan)

Add a scan to the pipeline for reduction.

do_process(args, block)

Multiprocessing safe implementation for source processing of scans.

iterate()

Perform an iteration.

perform_tasks_for_scans(args, block)

Perform a single iteration of all tasks for all scans for the pipeline.

set_ordering(ordering)

Set the task ordering for the pipeline.

set_source_model(source)

Set the source model for the pipeline.

update_source(scan)

Update the reduction source model with a scan.

update_source_parallel_scans()

Update the source in parallel.

update_source_serial_scans()

Update the source using serial processing.

Attributes Documentation

available_jobs

Return the maximum number of jobs that may be performed in parallel.

Returns:
jobs : int
configuration

Return the reduction configuration.

Returns:
Configuration
parallel_scans

Return the maximum number of parallel scan operations.

Returns:
jobs : int
parallel_tasks

Return the maximum number of parallel task (in-scan) operations.

Returns:
jobs : int
pipeline_id

Return a unique identifier for the pipeline.

Returns:
str

Methods Documentation

add_scan(scan)[source]

Add a scan to the pipeline for reduction.

Parameters:
scan : Scan
Returns:
None
classmethod do_process(args, block)[source]

Multiprocessing safe implementation for source processing of scans.

Parameters:
args : 4-tuple

args[0] = scans (list (Scan))
args[1] = temporary directory name (str)
args[2] = number of parallel jobs (int)
args[3] = whether to clear certain data from the scan (bool)

block : int

The index of the scan to process.

Returns:
scan_pickle_file : str

The filename pointing to the processed scan saved as a pickle file.
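
The multiprocessing-safe hand-off described above can be sketched as follows: a worker selects one scan by `block`, processes it, writes it to a pickle file in a shared temporary directory, and returns the filename so the parent process can reload the result. The function body and scan dictionaries here are illustrative stand-ins, not the `sofia_redux` source-processing code.

```python
# Sketch of the pickle hand-off pattern for multiprocessing-safe
# scan processing. Standard library only; processing is a stand-in.
import os
import pickle
import tempfile


def do_process(args, block):
    scans, work_dir, jobs, clear = args
    scan = scans[block]
    scan['processed'] = True  # stand-in for real source processing
    # Save the processed scan and hand back the filename, which is
    # cheap to return across process boundaries.
    scan_pickle_file = os.path.join(work_dir, f'scan_{block}.p')
    with open(scan_pickle_file, 'wb') as f:
        pickle.dump(scan, f)
    return scan_pickle_file


with tempfile.TemporaryDirectory() as tmp:
    args = ([{'id': 'a'}, {'id': 'b'}], tmp, 1, False)
    filename = do_process(args, block=1)
    with open(filename, 'rb') as f:
        result = pickle.load(f)
```

Returning a filename rather than the scan object itself keeps the inter-process payload small, which matters when scans hold large integration data.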

iterate()[source]

Perform an iteration.

Returns:
None
classmethod perform_tasks_for_scans(args, block)[source]

Perform a single iteration of all tasks for all scans for the pipeline.

Returns:
None
set_ordering(ordering)[source]

Set the task ordering for the pipeline.

Parameters:
ordering : list (str)

A list of tasks to perform.

Returns:
None
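
As an illustration, a task ordering is simply a list of task-name strings applied in sequence each iteration. The names below are hypothetical examples, not taken from the `sofia_redux` configuration.

```python
# Hypothetical task ordering; tasks run on each scan in this order.
ordering = ['offsets', 'drifts', 'weighting', 'source']
```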
set_source_model(source)[source]

Set the source model for the pipeline.

Parameters:
source : Source or None
Returns:
None
update_source(scan)[source]

Update the reduction source model with a scan.

Parameters:
scan : Scan
Returns:
None
update_source_parallel_scans()[source]

Update the source in parallel.

Returns:
None
update_source_serial_scans()[source]

Update the source using serial processing.

Returns:
None
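
The serial/parallel distinction above can be sketched as two update strategies over the same per-scan operation. For brevity this sketch uses a thread pool from the standard library, whereas the actual pipeline dispatches multiprocessing jobs; the "update" itself is a stand-in for folding a scan into the source model.

```python
# Sketch contrasting serial and parallel source updates.
from concurrent.futures import ThreadPoolExecutor


def update_source(scan):
    # Stand-in for updating the source model with one scan.
    return scan * scan


def update_source_serial_scans(scans):
    # Process scans one at a time in the current process.
    return [update_source(scan) for scan in scans]


def update_source_parallel_scans(scans, jobs=2):
    # Distribute scans across workers; map preserves scan order,
    # so both strategies yield identical results.
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        return list(pool.map(update_source, scans))


scans = [1, 2, 3]
serial = update_source_serial_scans(scans)
parallel = update_source_parallel_scans(scans)
```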