Reduction

class sofia_redux.scan.reduction.reduction.Reduction(instrument, configuration_file='default.cfg', configuration_path=None)[source]

Bases: ReductionVersion

Initialize the reduction object.

Parameters:
instrument : str or None

The name of the instrument for which the reduction applies.

configuration_file : str, optional

An optional configuration file to use.

configuration_path : str, optional

An alternate directory path to the configuration tree to be used during the reduction. The default is <package>/data/configurations.

Attributes Summary

configuration

Return the reduction configuration object.

info

Return the info object for the reduction.

instrument

Return the instrument name for the reduction.

is_sub_reduction

Return whether this reduction is a sub-reduction of a parent reduction.

name

Return the name (typically instrument) of the reduction.

reduction_id

Return a unique identifier for this reduction.

rounds

Return the maximum number of rounds (iterations) in the reduction.

size

Return the number of scans in the reduction.

total_reductions

Return the total number of reductions to be processed.

Methods Summary

add_user_configuration(**kwargs)

Add command line options to the reduction.

apply_options_to_scans(options)

Apply configuration options to all scans.

assign_parallel_jobs([reset])

Determine the parallel jobs for the reduction.

assign_reduction_files(filenames)

Assign reduction files to a reduction or sub-reductions.

assign_sub_reductions(sub_reductions)

Reassign sub-reductions to the parent (this) reduction.

blank_copy()

Return a base copy of the reduction.

edit_header(header)

Edit an image header with reduction information.

get_total_observing_time()

Return the total observing time for all scans in the reduction.

init_collective_source_model()

Create a source model for each sub-reduction with the same WCS.

init_pipelines()

Initialize the reduction pipeline.

init_source_model()

Initialize the source model.

is_valid()

Return whether the reduction contains any valid scans.

iterate([tasks])

Perform a single iteration.

iterate_pipeline(tasks)

Perform a single iteration of the pipeline.

iteration()

Return the current iteration.

parallel_safe_init_pipelines(sub_reductions, ...)

Multitask safe function to initialize sub-reduction pipelines.

parallel_safe_read_all_files(args, file_number)

Read a single file from a list of many and return a Scan.

parallel_safe_read_scan(args, file_number)

Read a single scan.

parallel_safe_read_sub_reduction_scans(args, ...)

Read all files in a single reduction to create Scan objects.

parallel_safe_reduce_sub_reduction(args, ...)

Reduce a single sub-reduction.

parallel_safe_validate_sub_reductions(...)

Multitask safe function to validate each sub-reduction.

pickle_sub_reductions()

Convert all sub-reductions to pickle files.

read_scan(filename[, read_fully])

Given a filename, read it and return a Scan instance.

read_scans([filenames])

Read a list of FITS files to create scans.

read_sub_reduction_scans()

Read scans for each sub-reduction.

reduce()

Perform the reduction.

reduce_sub_reductions()

Reduce all sub-reductions.

return_scan_from_read_arguments(filename, ...)

Create a scan given a FITS file and instrument channels.

run(filenames, **kwargs)

Run the initialized reduction on a set of files.

set_iteration(iteration[, rounds, for_scans])

Set the configuration for a given iteration.

set_object_options(source_name)

Set the configuration options for an observing source.

set_observing_time_options()

Set the configuration options for the observing time.

set_outpath()

Set the output directory based on the configuration.

solve_source()

Return whether to solve for the source.

summarize()

Summarize (print logs) for the iteration.

summarize_integration(integration)

Summarize (print logs) for an integration.

terminate_reduction()

Safely stop all relevant processes following a full reduction.

unpickle_sub_reductions([delete])

Retrieve all sub-reductions from pickle files.

update_parallel_config([reset])

Update the maximum number of jobs to parallel process.

update_runtime_config([reset])

Update important configuration settings prior to a run.

validate()

Validate scans following a read.

validate_scans()

Remove any invalid scans from the reduction scans.

validate_sub_reductions()

Validate all sub-reductions, then initialize models and pipelines.

write_products()

Write the products of the reduction to file.

Attributes Documentation

configuration

Return the reduction configuration object.

Returns:
Configuration
info

Return the info object for the reduction.

Returns:
Info
instrument

Return the instrument name for the reduction.

Returns:
instrument_name : str
is_sub_reduction

Return whether this reduction is a sub-reduction of a parent reduction.

Returns:
bool
name

Return the name (typically instrument) of the reduction.

Returns:
name : str
reduction_id

Return a unique identifier for this reduction.

Returns:
id : str
rounds

Return the maximum number of rounds (iterations) in the reduction.

Returns:
rounds : int
size

Return the number of scans in the reduction.

Returns:
n_scans : int
total_reductions

Return the total number of reductions to be processed.

This is of importance for polarimetry HAWC_PLUS reductions, where separate source maps are generated for each sub-reduction. Otherwise, only a single reduction is expected.

Returns:
int

Methods Documentation

add_user_configuration(**kwargs)[source]

Add command line options to the reduction.

Parameters:
kwargs : dict
Returns:
None
apply_options_to_scans(options)[source]

Apply configuration options to all scans.

Parameters:
options : dict
Returns:
None
assign_parallel_jobs(reset=False)[source]

Determine the parallel jobs for the reduction.

Determines:

1 - The number of sub-reductions to read in parallel.
2 - The number of scans to process in parallel.
3 - The number of tasks (processes within scans) to run in parallel.

Parameters:
reset : bool, optional

If True, allow the parallel settings for sub-reductions to be updated.

Returns:
None
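One consistent way to divide a core budget across the three levels above can be sketched as follows. This is purely illustrative: the real assignment is driven by the configuration and the package's multitask utilities, and `split_jobs` is a hypothetical helper, not part of the API.

```python
def split_jobs(available_cores, n_sub_reductions, n_scans):
    """Hypothetical division of a core budget across the three
    parallelism levels: sub-reductions, scans, and tasks.

    Returns (reduction_jobs, scan_jobs, task_jobs)."""
    # Read at most one sub-reduction per core.
    reduction_jobs = min(available_cores, max(1, n_sub_reductions))
    # Cores left for each sub-reduction to use internally.
    per_reduction = max(1, available_cores // reduction_jobs)
    # Process scans in parallel within that budget.
    scan_jobs = min(per_reduction, max(1, n_scans))
    # Remaining cores go to tasks within each scan.
    task_jobs = max(1, per_reduction // scan_jobs)
    return reduction_jobs, scan_jobs, task_jobs
```

With 8 cores, 2 sub-reductions, and 4 scans each, this sketch reads both sub-reductions in parallel and gives each 4 cores for scan-level processing.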
assign_reduction_files(filenames)[source]

Assign reduction files to a reduction or sub-reductions.

Parameters:
filenames : list (str) or list (list (str))

A list of files for a single reduction, or a list of files for each sub-reduction.

Returns:
None
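The two accepted shapes of the `filenames` argument can be normalized with a short sketch (illustrative only; `normalize_reduction_files` is not part of the API):

```python
def normalize_reduction_files(filenames):
    """Normalize `filenames` to one file list per (sub-)reduction.

    A flat list of strings is treated as the files for a single
    reduction; a list of lists gives one file list per sub-reduction.
    """
    if len(filenames) == 0:
        return []
    if isinstance(filenames[0], str):
        # Flat list: a single reduction.
        return [list(filenames)]
    # Nested list: one entry per sub-reduction.
    return [list(group) for group in filenames]
```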
assign_sub_reductions(sub_reductions)[source]

Reassign sub-reductions to the parent (this) reduction.

This should be performed following any parallel process in order to ensure that all sub-reductions reference the parent, and the parallel configuration is valid.

Parameters:
sub_reductions : list (Reduction)
Returns:
None
blank_copy()[source]

Return a base copy of the reduction.

There will be no scans, source or pipelines loaded. Only the basic channel information and info/configuration will be available.

Returns:
Reduction
edit_header(header)[source]

Edit an image header with reduction information.

Parameters:
header : fits.Header

The FITS header to edit.

Returns:
None
get_total_observing_time()[source]

Return the total observing time for all scans in the reduction.

Returns:
observing_time : astropy.units.Quantity

The total observing time.

init_collective_source_model()[source]

Create a source model for each sub-reduction with the same WCS.

Returns:
None
init_pipelines()[source]

Initialize the reduction pipeline.

The parallel pipelines setting defines the maximum number of pipelines that may be created to iterate in parallel. Parallel tasks are the number of cores left available for the pipeline to process in parallel.

Returns:
None
init_source_model()[source]

Initialize the source model.

Returns:
None
is_valid()[source]

Return whether the reduction contains any valid scans.

Note that this also removes any invalid integrations from all scans.

Returns:
bool
iterate(tasks=None)[source]

Perform a single iteration.

Parameters:
tasks : list (str), optional

A list of tasks to perform for the iteration.

Returns:
None
iterate_pipeline(tasks)[source]

Perform a single iteration of the pipeline.

Parameters:
tasks : list (str)

A list of the pipeline tasks to run.

Returns:
None
iteration()[source]

Return the current iteration.

Returns:
iteration : int
classmethod parallel_safe_init_pipelines(sub_reductions, index)[source]

Multitask safe function to initialize sub-reduction pipelines.

Parameters:
sub_reductions : list (Reduction)

All sub-reductions to initialize.

index : int

The index of the sub-reduction to initialize.

Returns:
None
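The `parallel_safe_*` class methods share one calling convention: a single picklable `args` tuple carrying everything the workers need, plus an index selecting one work item. A minimal sketch of that pattern, using threads in place of the package's multitask() machinery (`parallel_safe_square` and `map_over_indices` are hypothetical names for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_safe_square(args, index):
    """Square one element of a shared args tuple, selected by index.

    Mirrors the (args, index) convention: all shared state travels
    in one picklable tuple; each task picks its work item by index.
    """
    values = args[0]
    return values[index] ** 2

def map_over_indices(values, workers=2):
    """Run the worker over every index in parallel (threads stand in
    here for multitask(), which may use processes)."""
    args = (values,)
    with ThreadPoolExecutor(max_workers=workers) as executor:
        return list(executor.map(
            lambda i: parallel_safe_square(args, i),
            range(len(values))))
```

Keeping all shared inputs in one tuple is what makes these functions safe to dispatch to worker processes, which must receive their arguments by pickling.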
classmethod parallel_safe_read_all_files(args, file_number)[source]

Read a single file from a list of many and return a Scan.

This function is safe for multitask().

Parameters:
args : 2-tuple
A tuple of arguments where:

args[0] - A list (list (list)) of all read arguments.
args[1] - The pickle directory (str) in which to pickle the scan.

file_number : int

The index of the file to read in all of the supplied filenames (args[0]).

Returns:
scan : Scan or str

A Scan object if no pickling is required, or a string filename pointing to the pickled scan object.

classmethod parallel_safe_read_scan(args, file_number)[source]

Read a single scan.

This function is safe for multitask().

Parameters:
args : 2-tuple
A tuple of arguments where:

args[0] - A list (str) of all filenames.
args[1] - The reduction Channels object.

file_number : int

The index of the file to read in all of the supplied filenames (args[0]).

Returns:
scan : Scan or str

A Scan object if no pickling is required, or a string filename pointing to the pickled scan object.

classmethod parallel_safe_read_sub_reduction_scans(args, reduction_number)[source]

Read all files in a single reduction to create Scan objects.

Parameters:
args : 1-tuple

A single tuple where args[0] is a list (Reduction).

reduction_number : int

The reduction for which to read files out of all the supplied reductions (args[0]).

Returns:
reduction : Reduction

A reduction where the scans attribute has been populated with a list of read and validated Scan objects.

classmethod parallel_safe_reduce_sub_reduction(args, reduction_number)[source]

Reduce a single sub-reduction.

If the sub-reduction is a string pointing to a file, it is taken to be a cloudpickle file and restored. If the reduction is successful, this file will be deleted.

Parameters:
args : 1-tuple

A tuple containing all sub-reductions.

reduction_number : int

The index of the sub-reduction to reduce.

Returns:
sub_reduction : Reduction or str

A Reduction object if pickling is not enabled, or a path to the pickle file if it is.

classmethod parallel_safe_validate_sub_reductions(sub_reductions, index)[source]

Multitask safe function to validate each sub-reduction.

Parameters:
sub_reductions : list (Reduction)

All sub-reductions to validate.

index : int

The index of the sub-reduction to validate.

Returns:
None
pickle_sub_reductions()[source]

Convert all sub-reductions to pickle files.

All sub-reduction Reduction objects will be converted to pickle files, and the sub_reductions attribute will instead contain the on-disk locations of those files.

Returns:
None
read_scan(filename, read_fully=True)[source]

Given a filename, read it and return a Scan instance.

Scans are initialized based on the instrument name using the default configuration. The Configuration (owned by each scan) is then updated using information from the supplied file, and any necessary information will be extracted.

Parameters:
filename : str

The path to a FITS file.

read_fully : bool, optional

If False, do not fully read the scan (definition depends on the instrument).

Returns:
Scan
read_scans(filenames=None)[source]

Read a list of FITS files to create scans.

Parameters:
filenames : list (str) or list (list (str)), optional

A list of scan FITS file names. If not supplied, defaults to the files stored in the reduction_files attribute. If there are multiple sub-reductions, filenames must contain a list of filenames for each sub-reduction, i.e., filenames[0] contains the list of files to read for the first sub-reduction.

Returns:
None
read_sub_reduction_scans()[source]

Read scans for each sub-reduction.

Reduction files MUST have already been assigned to each sub-reduction.

Returns:
None
reduce()[source]

Perform the reduction.

Returns:
None
reduce_sub_reductions()[source]

Reduce all sub-reductions.

Returns:
None
classmethod return_scan_from_read_arguments(filename, channels)[source]

Create a scan given a FITS file and instrument channels.

Parameters:
filename : str
channels : sofia_redux.scan.channels.channels.Channels
Returns:
scan : sofia_redux.scan.scan.Scan
run(filenames, **kwargs)[source]

Run the initialized reduction on a set of files.

Note that all user configuration options must have been loaded at this stage.

Parameters:
filenames : str or list (str)

The file or files to reduce.

kwargs : dict, optional

Optional configuration options to pass into the reduction.

Returns:
hdul : fits.HDUList

A list of HDU objects containing the reduced source map.

set_iteration(iteration, rounds=None, for_scans=True)[source]

Set the configuration for a given iteration.

Parameters:
iteration : float or int or str

The iteration to set. A positive integer defines the exact iteration number, while a negative integer defines the iteration relative to the last (e.g., -1 is the last iteration). A float represents a fraction (0->1) of the number of rounds, and a string may be parsed in several ways, such as 'last', 'first', a float, an integer, or a percentage (if suffixed with %).

rounds : int, optional

The maximum number of iterations.

for_scans : bool, optional

If True, set the iteration for all scans as well.

Returns:
None
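The iteration specifiers described above can be sketched as a small parser. `resolve_iteration` is a hypothetical helper written only to illustrate the documented rules, assuming 1-based iteration numbers; it is not part of the API:

```python
def resolve_iteration(spec, rounds):
    """Resolve an iteration specifier to a 1-based iteration number.

    Rules sketched from the set_iteration documentation:
    positive int -> exact iteration; negative int -> relative to the
    last (-1 is the last); float -> fraction of `rounds`; strings may
    be 'first', 'last', a number, or a percentage such as '90%'.
    """
    if isinstance(spec, str):
        s = spec.strip().lower()
        if s == 'first':
            return 1
        if s == 'last':
            return rounds
        if s.endswith('%'):
            return max(1, round(float(s[:-1]) / 100 * rounds))
        # Plain numeric string: fall through as float or int.
        spec = float(s) if '.' in s else int(s)
    if isinstance(spec, float):
        return max(1, round(spec * rounds))
    if spec < 0:
        return rounds + 1 + spec  # e.g. -1 -> rounds
    return spec
```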
set_object_options(source_name)[source]

Set the configuration options for an observing source.

Parameters:
source_name : str

The source name.

Returns:
None
set_observing_time_options()[source]

Set the configuration options for the observing time.

Returns:
None
set_outpath()[source]

Set the output directory based on the configuration.

If the configuration path does not exist, it will be created if the ‘outpath.create’ option is set. Otherwise, an error will be raised.

Returns:
None
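The documented behaviour (create a missing output directory only when 'outpath.create' is set, otherwise raise an error) can be sketched as a standalone function. This `set_outpath` is an illustration only, with the configuration option reduced to a `create` flag:

```python
import os

def set_outpath(outpath, create=False):
    """Ensure the output directory exists.

    Creates the directory only when `create` is set (standing in for
    the 'outpath.create' configuration option); otherwise a missing
    path raises an error.
    """
    if not os.path.isdir(outpath):
        if not create:
            raise ValueError(f"Output path does not exist: {outpath}")
        os.makedirs(outpath)
    return outpath
```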
solve_source()[source]

Return whether to solve for the source.

Returns:
bool
summarize()[source]

Summarize (print logs) for the iteration.

Returns:
None
static summarize_integration(integration)[source]

Summarize (print logs) for an integration.

Parameters:
integration : Integration
Returns:
None
terminate_reduction()[source]

Safely stop all relevant processes following a full reduction.

Returns:
None
unpickle_sub_reductions(delete=True)[source]

Retrieve all sub-reductions from pickle files.

All sub-reduction Reduction objects will be restored from pickle files whose filenames are present in the sub_reductions attribute.

Parameters:
delete : bool, optional

If True, delete all pickle files and the pickle directory.

Returns:
None
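The round trip performed by pickle_sub_reductions and unpickle_sub_reductions can be sketched with the standard library. The real code uses cloudpickle files; `pickle_items` and `unpickle_items` are hypothetical names for this illustration:

```python
import os
import pickle

def pickle_items(items, directory):
    """Write each item to its own pickle file and return the paths,
    mirroring how sub_reductions is replaced by on-disk locations."""
    paths = []
    for i, item in enumerate(items):
        path = os.path.join(directory, f'sub_reduction_{i}.p')
        with open(path, 'wb') as f:
            pickle.dump(item, f)
        paths.append(path)
    return paths

def unpickle_items(paths, delete=True):
    """Restore items from pickle files, optionally deleting each file
    afterwards, as unpickle_sub_reductions(delete=True) does."""
    items = []
    for path in paths:
        with open(path, 'rb') as f:
            items.append(pickle.load(f))
        if delete:
            os.remove(path)
    return items
```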
update_parallel_config(reset=False)[source]

Update the maximum number of jobs to parallel process.

Parameters:
reset : bool, optional

If True, re-assign the available parallel jobs to this reduction and all sub-reductions if necessary.

Returns:
None
update_runtime_config(reset=False)[source]

Update important configuration settings prior to a run.

The output path and parallel processing configuration will be determined during this stage.

Parameters:
reset : bool, optional

If True, re-assign the available parallel jobs to this reduction and all sub-reductions if necessary.

Returns:
None
validate()[source]

Validate scans following a read.

Returns:
None
Raises:
ValueError

If there are no scans to reduce.

validate_scans()[source]

Remove any invalid scans from the reduction scans.

Returns:
None
validate_sub_reductions()[source]

Validate all sub-reductions, then initialize models and pipelines.

Returns:
None
write_products()[source]

Write the products of the reduction to file.

Returns:
None