StepParent
- class sofia_redux.instruments.hawc.stepparent.StepParent
Bases: object
Pipeline step parent class.
This class defines a pipeline step. Pipeline steps are the modules responsible for all data reduction tasks. Input and output data are passed as pipeline data objects (DataFits). This class expects that data is passed as a single data object, and the output is also a single data object (single-in, single-out (SISO) mode).
All pipeline steps inheriting from this class must define a setup function that initializes data reduction parameters and metadata, and a run function that performs the data reduction.
This class is callable: given a data object input and keyword arguments corresponding to the step parameters, it calls the run function and returns a data object as output.
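As an illustration, a minimal subclass might look like the following sketch. The step class StepScale and its parameter are hypothetical; the attribute names set in setup() (self.name, self.description, self.paramlist) and the copy() and image members of the data object are assumptions of this sketch, not requirements stated on this page (self.procname and self.datain/self.dataout are described in the method documentation below):

    from sofia_redux.instruments.hawc.stepparent import StepParent

    class StepScale(StepParent):
        """Hypothetical SISO step that scales the primary image."""

        def setup(self):
            # Step identification and parameter list (assumed attribute names);
            # see the setup() documentation below for the entry format.
            self.name = 'scale'
            self.description = 'Scale Image'
            self.procname = 'scl'
            self.paramlist = [['factor', 1.0, 'Multiplicative scale factor']]

        def run(self):
            # Read input from self.datain, set the result in self.dataout.
            # copy() and image are assumed members of the data object.
            self.dataout = self.datain.copy()
            self.dataout.image = self.dataout.image * self.getarg('factor')

    # The step is callable: pass a data object plus keyword parameters,
    # and a data object is returned.
    # step = StepScale()
    # dataout = step(datain, factor=2.0)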
Methods Summary
- __call__(datain, **kwargs): Run the pipeline step.
- getarg(parname): Return the value of a parameter.
- run(): Run the data reduction algorithm.
- runend(data): Clean up after a pipeline step.
- runstart(data, arglist): Initialize the pipeline step.
- setup(): Set parameters and metadata for the pipeline step.
- updateheader(data): Update the header for a data object.
Methods Documentation
- __call__(datain, **kwargs)
Run the pipeline step.
- Parameters:
- datain : DataFits or DataText
Input data.
- **kwargs
Parameter name, value pairs to pass to the pipeline step.
- Returns:
- DataFits or DataText
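For example, an instance of the hypothetical StepScale step sketched above can be invoked directly; keyword values are passed on to the step's parameters:

    step = StepScale()                   # hypothetical step from the sketch above
    dataout = step(datain)               # datain: an existing DataFits object
    dataout = step(datain, factor=2.0)   # keyword overrides the 'factor' default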
- getarg(parname)
Return the value of a parameter.
The parameter is first searched for in self.arglist['parname'], then in config['stepname']['parname']. If the parameter is not found in either place, the default value from the parameter list is returned. If the parameter name has no entry in the parameter list, an error is logged and a KeyError is raised.
All name comparisons are made in lower case.
- Parameters:
- parname : str
The parameter name.
- Returns:
- bool, int, float, str, or list
The parameter value.
- Raises:
- KeyError
If the parameter name is not found.
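In practice this means a value passed as a keyword argument (stored in self.arglist) wins over the configuration file entry, which in turn wins over the setup() default. A hedged sketch, reusing the hypothetical StepScale step and its 'factor' parameter:

    step = StepScale()
    step(datain)               # 'factor' comes from config['scale']['factor'],
                               # or falls back to the setup() default of 1.0
    step(datain, factor=2.0)   # 'factor' comes from the keyword argument

    # A name with no entry in the parameter list raises KeyError.
    try:
        step.getarg('no_such_parameter')
    except KeyError:
        pass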
- run()
Run the data reduction algorithm.
Input is read from self.datain. The result is set in self.dataout.
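Because run() takes no arguments and works only on these instance attributes, a step can also be exercised on its own in a test or interactive session, as in this sketch; bypassing __call__ (and therefore runstart) means parameter values come only from the configuration or the setup() defaults:

    step = StepScale()        # hypothetical step from the sketch above
    step.datain = datain      # set the input by hand
    step.run()
    result = step.dataout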
- runend(data)
Clean up after a pipeline step.
This method should be called after calling self.run.
Sends a final log message, updates the header in the output data, and clears input parameter arguments.
- Parameters:
- data : DataFits or DataText
Output data to update.
- runstart(data, arglist)
Initialize the pipeline step.
This method should be called after setting self.datain, and before calling self.run.
Sends an initial log message, checks the validity of the input data, and gets the configuration from input data.
- Parameters:
- data : DataFits or DataText
Input data to validate.
- arglist : dict
Parameters to pass to the step.
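Together with runend(), this defines the calling sequence for driving a step by hand rather than through __call__; a sketch using the hypothetical StepScale step:

    step = StepScale()
    step.datain = datain
    step.runstart(datain, {'factor': 2.0})   # log start, validate input, read config
    step.run()                               # reduce: self.datain -> self.dataout
    step.runend(step.dataout)                # log end, update header, clear arguments
    dataout = step.dataout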
- setup()
Set parameters and metadata for the pipeline step.
This function is called at the end of __init__ to establish parameters and metadata specific to a pipeline step.
The name of the step and a short description should be set, as well as a three-letter abbreviation for the step. The first two values are used for header history and pipeline display; the abbreviation is used in the output filenames.
Parameters are stored in a list, where each element is a list containing the following information:
- name: The name for the parameter. This name is used when calling the pipe step from a Python shell. It is also used to identify the parameter in the pipeline configuration file.
- default: A default value for the parameter. If there is no meaningful default, set '' for strings, 0 for integers, and 0.0 for floats.
- help: A short description of the parameter.
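A hedged sketch of a setup() implementation following this scheme is shown below. The attribute names used for the step name, description, abbreviation, and parameter list (self.name, self.description, self.procname, self.paramlist) are assumptions, and the parameters themselves are purely illustrative:

    def setup(self):
        # Step name, short description, and three-letter abbreviation
        # (the abbreviation is used in output filenames).
        self.name = 'scale'
        self.description = 'Scale Image'
        self.procname = 'scl'

        # Parameter list: each entry is [name, default, help].
        self.paramlist = []
        self.paramlist.append(['factor', 1.0,
                               'Multiplicative scale factor'])
        self.paramlist.append(['niter', 0,
                               'Number of iterations (0: no meaningful default)'])
        self.paramlist.append(['method', '',
                               'Scaling method (empty string: no default)'])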
- updateheader(data)
Update the header for a data object.
This function:
- Updates the filename with the self.procname value
- Sets the PROCSTAT and PRODTYPE keywords in the data header
- Adds a history entry to the data header
Data is modified in place.
- Parameters:
- data : DataFits or DataText
Output data to update.
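After a step has run, its effect can be seen on the output object, as in the hedged sketch below; the filename attribute and FITS-style header access used here are assumptions of this sketch, not behavior documented on this page:

    # 'dataout' is the object returned by the hypothetical StepScale call above.
    print(dataout.filename)             # filename now includes the 'scl' procname
    print(dataout.header['PRODTYPE'])   # product type keyword set by updateheader
    print(dataout.header['PROCSTAT'])   # processing status keyword set by updateheader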