Observation2D

class sofia_redux.scan.source_models.maps.observation_2d.Observation2D(data=None, blanking_value=nan, dtype=<class 'float'>, shape=None, unit=None, weight_dtype=<class 'float'>, weight_blanking_value=None)[source]

Bases: Map2D

Initialize an Observation2D object.

The 2-D observation is an extension of the Map2D class that includes weights and exposure times in addition to the observation data values.

Parameters:
data : numpy.ndarray, optional

Data to initialize the flagged array with. If supplied, sets the shape of the array. Note that the data type will be set to that defined by the dtype parameter.

blanking_value : int or float, optional

The blanking value defines invalid values in the data array. This is the equivalent of defining a NaN value.

dtype : type, optional

The data type of the data array.

shape : tuple (int), optional

The shape of the data array. This will only be relevant if data is not defined.

unit : str or units.Unit or units.Quantity, optional

The data unit.

weight_dtype : type, optional

Similar to dtype, except this defines the data type for the observation weights and exposure times.

weight_blanking_value : int or float, optional

The blanking value for the weight and exposure maps. If None, will be set to np.nan if weight_dtype is a float, and 0 if weight_dtype is an integer.
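
A minimal construction might look like the following sketch. The array shape, unit, and values are purely illustrative; in practice the map is normally created and populated by the reduction pipeline, and additional setup inherited from Map2D (such as a grid) may be needed before it is useful.

>>> import numpy as np
>>> from sofia_redux.scan.source_models.maps.observation_2d import Observation2D
>>> data = np.zeros((64, 64), dtype=float)
>>> obs = Observation2D(data=data, unit='Jy/beam', blanking_value=np.nan)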

Attributes Summary

valid

Return a boolean mask array of valid data elements.

Methods Summary

accumulate(image[, weight, gain, valid])

Add an observation image.

accumulate_at(image, gains, weights, times)

Accumulate at given indices.

clear([indices])

Clear flags and set data to zero.

copy([with_contents])

Return a copy of the map.

copy_processing_from(other)

Copy the processing from another 2-D observation.

crop(ranges)

Crop the image data.

destroy()

Destroy the image data.

discard([indices])

Set the flags for discarded indices to DISCARD and data to zero.

end_accumulation()

End the accumulation process by dividing the data values by the weight.

exposure_values()

Return the array of exposure values.

fast_smooth(beam_map, steps[, ...])

Smooth the data with a given beam map kernel using the fast method.

fft_filter_above(fwhm[, valid, weight])

Apply FFT filtering above a given FWHM.

filter_correct(underlying_fwhm[, reference, ...])

Apply filter correction.

get_chi2([robust])

Return the Chi-squared statistic.

get_exposure_image()

Return the exposure image.

get_exposures()

Return an exposure overlay.

get_hdus()

Return the FITS HDUs for the observation.

get_info()

Get a list of info strings for the observation.

get_noise()

Return a noise overlay.

get_significance()

Return a significance overlay.

get_table_entry(name)

Return a parameter value for a given name.

get_weight_image()

Return the weights image.

get_weights()

Return the weights overlay.

index_of_max([sign, data])

Return the maximum value and index of maximum value.

mean([weights])

Return the weighted mean.

median([weights])

Return the weighted median.

mem_correct_observation(model, lg_multiplier)

Apply a maximum entropy correction given a model.

merge_accumulate(image)

Merge and accumulate an image onto this one.

noise_values()

Return the array of noise values.

resample_from_map(obs2d[, weights])

Resample from one map to another.

reset_processing()

Reset the processing status.

reweight([robust])

Re-weight the observation.

scale(factor[, indices])

Scale the data values and weights by a given factor.

set_data_shape(shape)

Set the shape of the data, weight, and exposure images.

set_exposure_image(exposure_image)

Set the exposure image.

set_noise(noise_image)

Set the noise image.

set_significance(significance_image)

Set the significance image.

set_weight_image(weight_image)

Set the weight image.

significance_values()

Return the array of significance (S2N).

smooth(beam_map[, reference_index, weights])

Smooth the data with a given beam map kernel.

to_weight_image(data)

Convert data to a weight image.

undo_filter_correct([reference, valid])

Undo the last filter correction.

unscale_weights()

Undo the weight rescaling.

weight_values()

Return the array of weights.

Attributes Documentation

valid

Return a boolean mask array of valid data elements.

Valid elements are those that are not NaN, not set to the blanking value, and not flagged with the validating_flags.

Returns:
numpy.ndarray (bool)

A boolean mask where True indicates a valid element.
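
For example, the mask can be used to restrict operations to usable elements (a sketch, assuming obs is a populated Observation2D):

>>> mask = obs.valid              # True where data are neither NaN, blanked, nor flagged
>>> total = obs.data[mask].sum()  # operate only on valid elements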

Methods Documentation

accumulate(image, weight=1.0, gain=1.0, valid=None)[source]

Add an observation image.

This is meant to be performed when the observation data array contains the product of the actual data with the weights. The array is later converted back to the standard data/weight format by calling Observation2D.end_accumulation().

Parameters:
image : Observation2D

The observation to add.

weight : float, optional

A global weighting factor for the entire image. Typically the scan weight from which the image was derived.

gain : float, optional

A gain factor that will be applied to the image values. It will be applied to the weighting factors as g^2 during accumulation.

valid : numpy.ndarray (bool), optional

An array where False excludes a datum from accumulation.

Returns:
None
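
A sketch of the accumulate/end_accumulation cycle (obs, scan_map, and scan_map_2 stand in for previously constructed Observation2D maps, and the weight values are illustrative):

>>> obs.accumulate(scan_map, weight=2.0)    # data are stored as data * weight
>>> obs.accumulate(scan_map_2, weight=1.5)
>>> obs.end_accumulation()                  # divide by summed weights to recover data
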
accumulate_at(image, gains, weights, times, indices=None)[source]

Accumulate at given indices.

The data are accumulated as: image * gains * weights
The weights are accumulated as: weights * gains^2
The exposures are accumulated as: times

Parameters:
image : FlaggedArray or numpy.ndarray or float
gains : FlaggedArray or numpy.ndarray or float
weights : FlaggedArray or numpy.ndarray or float
times : FlaggedArray or numpy.ndarray or float
indices : numpy.ndarray (bool or int), optional

If a boolean mask is supplied, accumulation occurs at those indices of self.data marked as True. In that case, image, weights, and times should either match the shape of self.data or be scalar values.

Returns:
None
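
In plain NumPy terms, the accumulation arithmetic above amounts to the following sketch (array names and values are illustrative; the real method also handles flags and optional index selection):

>>> import numpy as np
>>> image = np.ones((4, 4))
>>> gains, weights, times = 2.0, 0.5, 0.1
>>> data_increment = image * gains * weights   # added to the data plane
>>> weight_increment = weights * gains ** 2    # added to the weight plane
>>> exposure_increment = times                 # added to the exposure plane
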
clear(indices=None)[source]

Clear flags and set data to zero. Clear history.

Parameters:
indices : tuple (numpy.ndarray (int)) or numpy.ndarray (bool), optional

The indices to clear. May be supplied as a tuple of integer index arrays, or as a boolean mask of shape self.data.shape.

Returns:
None
copy(with_contents=True)[source]

Return a copy of the map.

Returns:
Observation2D
copy_processing_from(other)[source]

Copy the processing from another 2-D observation.

Parameters:
other : Observation2D
Returns:
None
crop(ranges)[source]

Crop the image data.

Parameters:
ranges : numpy.ndarray (int,) or units.Quantity (numpy.ndarray)

The ranges to crop the data to. Should be of shape (n_dimensions, 2), where ranges[0, 0] gives the minimum crop limit and ranges[0, 1] gives the maximum crop limit for the first dimension. Here the 'first' dimension is in FITS order, i.e., (x, y) for a 2-D image. If a Quantity is supplied, it should contain the minimum and maximum grid values to clip to in each dimension.

Returns:
None
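
For example, cropping a 2-D map to x pixels 10-100 and y pixels 20-80 would use a (2, 2) ranges array in FITS (x, y) order (a sketch; the limits are illustrative):

>>> import numpy as np
>>> ranges = np.array([[10, 100],   # x (first FITS dimension): min, max
...                    [20, 80]])   # y (second FITS dimension): min, max
>>> obs.crop(ranges)
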
destroy()[source]

Destroy the image data.

Returns:
None
discard(indices=None)[source]

Set the flags for discarded indices to DISCARD and data to zero.

Parameters:
indices : tuple (numpy.ndarray (int)) or numpy.ndarray (bool), optional

The indices to discard. May be supplied as a tuple of integer index arrays, or as a boolean mask of shape self.data.shape.

Returns:
None
end_accumulation()[source]

End the accumulation process by dividing the data values by the weight.

Zero-valued weights are ignored.

Returns:
None
exposure_values()[source]

Return the array of exposure values.

Returns:
numpy.ndarray (float)
fast_smooth(beam_map, steps, reference_index=None, weights=None)[source]

Smooth the data with a given beam map kernel using the fast method.

Parameters:
beam_map : numpy.ndarray (float)
steps : numpy.ndarray (int)

The kernel steps in each dimension.

reference_index : numpy.ndarray (float), optional

The reference index (center) of the beam_map kernel. By default this will be set to (beam_map.shape - 1)[::-1] / 2. Note that the reference index should be supplied in (x, y) order for FITS.

weights : numpy.ndarray (float), optional

If not supplied, defaults to the observation weights.

Returns:
None

Notes

This isn’t fast compared to standard smooth as it requires an additional spline interpolation step.

fft_filter_above(fwhm, valid=None, weight=None)[source]

Apply FFT filtering above a given FWHM.

Parameters:
fwhm : astropy.units.Quantity
valid : numpy.ndarray (bool), optional
weight : FlaggedArray or numpy.ndarray (float)
Returns:
None
filter_correct(underlying_fwhm, reference=None, valid=None)[source]

Apply filter correction.

Parameters:
underlying_fwhm : astropy.units.Quantity
reference : FlaggedArray or numpy.ndarray, optional
valid : numpy.ndarray (bool), optional
Returns:
None
get_chi2(robust=True)[source]

Return the Chi-squared statistic.

Parameters:
robust : bool, optional

If True, use the ‘robust’ (median) method.

Returns:
float
get_exposure_image()[source]

Return the exposure image.

Returns:
Image2D
get_exposures()[source]

Return an exposure overlay.

Returns:
ExposureMap
get_hdus()[source]

Return the FITS HDUs for the observation.

Returns:
hdus: list (astropy.io.fits.hdu.base.ExtensionHDU)
get_info()[source]

Get a list of info strings for the observation.

Returns:
list of str
get_noise()[source]

Return a noise overlay.

Returns:
NoiseMap
get_significance()[source]

Return a significance overlay.

Returns:
SignificanceMap
get_table_entry(name)[source]

Return a parameter value for a given name.

Parameters:
name : str

The name of the entry to retrieve.

Returns:
value
get_weight_image()[source]

Return the weights image.

Returns:
Image2D
get_weights()[source]

Return the weights overlay.

Returns:
WeightMap
index_of_max(sign=1, data=None)[source]

Return the maximum value and index of maximum value.

Parameters:
sign : int or float, optional

If positive, find the maximum value in the array. If negative, find the minimum value in the array. If zero, find the maximum magnitude in the array.

data : numpy.ndarray (float), optional

The data array to examine. Default is the significance values.

Returns:
maximum_value, maximum_index : float, int
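
For example (a sketch, assuming obs contains data; the significance map is searched by default):

>>> peak_value, peak_index = obs.index_of_max()        # brightest positive peak
>>> low_value, low_index = obs.index_of_max(sign=-1)   # most negative value
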
mean(weights=None)[source]

Return the weighted mean.

Parameters:
weights : numpy.ndarray (float), optional

An array of weights.

Returns:
mean, weight : float, float
median(weights=None)[source]

Return the weighted median.

Parameters:
weights : numpy.ndarray (float), optional

An array of weights.

Returns:
median, weight : float, float
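
A sketch of typical usage; the second returned value is the combined weight of the estimate, and my_weights is a hypothetical user-supplied array:

>>> m, w = obs.mean()                            # weighted by the observation weights
>>> med, med_w = obs.median(weights=my_weights)  # or with explicit weights
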
mem_correct_observation(model, lg_multiplier)[source]

Apply a maximum entropy correction given a model.

Parameters:
model : numpy.ndarray or FlaggedArray or None

The model on which to base the MEM correction. Should be of shape (self.shape).

lg_multiplier : float

The Lagrange multiplier (lambda) for the MEM correction.

Returns:
None
merge_accumulate(image)[source]

Merge and accumulate an image onto this one.

Parameters:
image : Observation2D
Returns:
None
noise_values()[source]

Return the array of noise values.

Returns:
numpy.ndarray (float)
resample_from_map(obs2d, weights=None)[source]

Resample from one map to another.

Parameters:
obs2d : Observation2D

The map to resample from.

weights : numpy.ndarray (float), optional

Optional weights to use during the resampling. If not supplied, defaults to the weights of the supplied observation.

Returns:
None
reset_processing()[source]

Reset the processing status.

Returns:
None
reweight(robust=True)[source]

Re-weight the observation.

Parameters:
robust : bool, optional

If True, use the ‘robust’ (median) method to determine the chi2 statistic.

Returns:
None
scale(factor, indices=None)[source]

Scale the data values and weights by a given factor.

Parameters:
factor : float
indices : numpy.ndarray (bool), optional
Returns:
None
set_data_shape(shape)[source]

Set the shape of the data, weight, and exposure images.

Parameters:
shape : tuple (int)
Returns:
None
set_exposure_image(exposure_image)[source]

Set the exposure image.

Parameters:
exposure_image : Image2D or numpy.ndarray or None
Returns:
None
set_noise(noise_image)[source]

Set the noise image.

Parameters:
noise_image : Image2D or Overlay or numpy.ndarray or None
Returns:
None
set_significance(significance_image)[source]

Set the significance image.

Parameters:
significance_image : Image2D or Overlay or numpy.ndarray or None
Returns:
None
set_weight_image(weight_image)[source]

Set the weight image.

Parameters:
weight_image : Image2D or numpy.ndarray or None
Returns:
None
significance_values()[source]

Return the array of significance (S2N).

Returns:
numpy.ndarray (float)
smooth(beam_map, reference_index=None, weights=None)[source]

Smooth the data with a given beam map kernel.

Parameters:
beam_map : numpy.ndarray (float)
reference_index : numpy.ndarray (float), optional

The reference index (center) of the beam_map kernel. By default this will be set to (beam_map.shape - 1)[::-1] / 2. Note that the reference index should be supplied in (x, y) order for FITS.

weights : numpy.ndarray (float), optional

If not supplied, defaults to the observation weights.

Returns:
None
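
A sketch of smoothing with a small Gaussian kernel; the kernel size and width are illustrative, and the reference index is supplied in (x, y) order:

>>> import numpy as np
>>> y, x = np.mgrid[:5, :5] - 2.0
>>> beam_map = np.exp(-(x ** 2 + y ** 2) / (2 * 1.2 ** 2))    # approximate Gaussian kernel
>>> obs.smooth(beam_map, reference_index=np.array([2.0, 2.0]))
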
to_weight_image(data)[source]

Convert data to a weight image.

Parameters:
data : FlaggedArray or FitsData or numpy.ndarray or None
Returns:
Image2D
undo_filter_correct(reference=None, valid=None)[source]

Undo the last filter correction.

Parameters:
reference : FlaggedArray or numpy.ndarray (float), optional

The data set used to determine valid data within the blanking range. Defaults to self.data.

valid : numpy.ndarray (bool), optional

True indicates a data element that may have the filter correction factor un-applied.

Returns:
None
unscale_weights()[source]

Undo the weight rescaling.

Returns:
None
weight_values()[source]

Return the array of weights.

Returns:
numpy.ndarray (float)