
Initial XPCS GUIPlugin #3

Merged
merged 50 commits
Aug 15, 2019
Commits (50)
cea51d8
Initial layout for XPCSGUI
ihumphrey Jun 18, 2019
c97fc2b
Update requirments
ihumphrey Jun 18, 2019
c284366
Rudimentary XPCS Correlate
ihumphrey Jun 21, 2019
56b8b35
Update fork with current work
ihumphrey Jun 24, 2019
d8e7caf
Fix XPCSWorkflow names
ihumphrey Jun 25, 2019
0b0d90c
Update XPCS workflow to handle mutliple selections
ihumphrey Jun 25, 2019
872eaa5
Separate XPCS algorithm stages
ihumphrey Jun 29, 2019
31a5058
OneTime corr no longer passing data as ints
ihumphrey Jun 29, 2019
eadc93f
Initial correlation document class
ihumphrey Jul 2, 2019
e52d316
Updating XPCS gui
ihumphrey Jul 9, 2019
272a9af
Change onetime defaults to run time corr on all images
ihumphrey Jul 9, 2019
977c7e9
Configure XPCS one time to plot all values
ihumphrey Jul 9, 2019
127156e
Update XPCS to use generator for result docs
ihumphrey Jul 9, 2019
9aa2a34
Update XPCS to update results selection
ihumphrey Jul 10, 2019
16cf9c7
Refactor the way XPCS gets its stage's objects
ihumphrey Jul 10, 2019
a9b2f39
Update OneTime parameter tree
ihumphrey Jul 15, 2019
5017ea3
Initial model for relaxation rate
ihumphrey Jul 15, 2019
984b9f4
Update fitting names
ihumphrey Jul 15, 2019
0320816
Merge upstream master
ihumphrey Jul 15, 2019
dffc7a8
XPCSProcessors now cache workflows and retrieve params
ihumphrey Jul 18, 2019
2c7c7eb
Hide certain XPCS ProcessingPlugin inputs
ihumphrey Jul 18, 2019
4b68aac
Add FitScatteringFactor.fit_deriv(); hide some Inputs
ihumphrey Jul 18, 2019
d905909
Update OneTimeCorrelation to use ndarray types
ihumphrey Jul 18, 2019
9a07950
Remove temp file
ihumphrey Jul 18, 2019
ba7c297
Update XPCS GUI for easier multiplots
ihumphrey Jul 22, 2019
b267fde
Update XPCS to plot both internal and external processed data
ihumphrey Jul 24, 2019
254bc1b
Clean up APSXPCS handler
ihumphrey Jul 25, 2019
c7bea8b
Update XPCS to use checkable tree view for results
ihumphrey Jul 25, 2019
3901c0f
Update XPCS data keys
ihumphrey Jul 29, 2019
bdc9bf9
Plot legend updates w/ plot items
ihumphrey Jul 30, 2019
922b832
Cleanup XPCS __init__ and views
ihumphrey Aug 1, 2019
44b5f8b
Add run icon to XPCS process btn
ihumphrey Aug 6, 2019
f2eb71d
remove unused createFigure method
ihumphrey Aug 6, 2019
8dc4140
Add fit-curve to XPCS
ihumphrey Aug 7, 2019
ebb2c76
Add g2 fit to APS XPCS handler
ihumphrey Aug 12, 2019
49fba73
Remove unused CorrelationDocument file
ihumphrey Aug 13, 2019
1fc4174
Cleanup fitting.py
ihumphrey Aug 13, 2019
e5054d3
Cleanup onetime.py
ihumphrey Aug 13, 2019
52fc657
Sort onetime imports
ihumphrey Aug 13, 2019
5bf6eec
Remove unused InOut import in onetime
ihumphrey Aug 13, 2019
4eb727c
Cleanup views.py
ihumphrey Aug 13, 2019
b42ed68
Cleanup XPCS workflows
ihumphrey Aug 14, 2019
7106cce
Warn if multiple imgs selected in APSXPCS
ihumphrey Aug 14, 2019
2d4f5b1
Sort APSXPCS imports
ihumphrey Aug 14, 2019
6120c97
Put fitting InputOutput params first
ihumphrey Aug 14, 2019
500a465
Sort XPCS imports
ihumphrey Aug 14, 2019
56fdcaa
Modify check for loading processed data
ihumphrey Aug 14, 2019
2754fad
Add scikit-beam PEP 508 URL to reqs
ihumphrey Aug 14, 2019
a829fdb
Rename var data_key
ronpandolfi Aug 14, 2019
d59255d
Remove old todo
ronpandolfi Aug 14, 2019
8 changes: 8 additions & 0 deletions requirements.txt
@@ -4,4 +4,12 @@ sphinx_rtd_theme
nose
coverage
pypi-publisher
scipy
scikit-image
xicam.SAXS
event-model
intake_bluesky

# Wait until ronpandolfi's changes (#542) are in release /PyPi ...
scikit-beam @ git+https://github.com/scikit-beam/scikit-beam

6 changes: 3 additions & 3 deletions setup.py
@@ -14,8 +14,8 @@
with open(path.join(here, 'requirements.txt'), encoding='utf-8') as f:
all_reqs = f.read().split('\n')

install_requires = [x.strip() for x in all_reqs if 'git+' not in x]
dependency_links = [x.strip().replace('git+', '') for x in all_reqs if x.startswith('git+')]
install_requires = [x.strip() for x in all_reqs]# if 'git+' not in x]
#dependency_links = [x.strip().replace('git+', '') for x in all_reqs if x.startswith('git+')]

setup(
name='xicam.XPCS',
@@ -37,6 +37,6 @@
include_package_data=True,
author='Ron Pandolfi',
install_requires=install_requires,
dependency_links=dependency_links,
#dependency_links=dependency_links,
author_email='ronpandolfi@lbl.gov'
)
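The setup.py change above is worth unpacking: the old splitting logic not only stripped `git+` lines out of `install_requires`, it also expected bare `git+URL` lines for `dependency_links`, so a PEP 508 direct-URL requirement like `scikit-beam @ git+https://...` would have been dropped entirely. A minimal sketch of both behaviors (helper names here are mine, not from the repo; the premise that pip >= 18.1 resolves PEP 508 direct URLs from `install_requires` is why `dependency_links` is commented out):

```python
# Hypothetical helpers sketching the old vs. new requirement handling.

def split_requirements_old(all_reqs):
    # Old logic: drop anything containing "git+" from install_requires,
    # and collect dependency_links only from lines that *start* with "git+".
    install_requires = [x.strip() for x in all_reqs if 'git+' not in x]
    dependency_links = [x.strip().replace('git+', '') for x in all_reqs
                        if x.startswith('git+')]
    return install_requires, dependency_links

def install_requires_new(all_reqs):
    # New logic: keep every line, including PEP 508 direct-URL requirements.
    return [x.strip() for x in all_reqs]

reqs = ['numpy',
        'scikit-beam @ git+https://github.com/scikit-beam/scikit-beam']
old_ir, old_dl = split_requirements_old(reqs)
new_ir = install_requires_new(reqs)
```

Under the old logic the scikit-beam line vanishes from both lists (it contains `git+` but does not start with it); the new logic hands it to pip unchanged.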
372 changes: 320 additions & 52 deletions xicam/XPCS/__init__.py

Large diffs are not rendered by default.

126 changes: 126 additions & 0 deletions xicam/XPCS/formats/APSXPCS.py
@@ -0,0 +1,126 @@
import time
from pathlib import Path

import event_model
import h5py
from intake_bluesky.in_memory import BlueskyInMemoryCatalog

from xicam.core.msg import WARNING, notifyMessage
from xicam.plugins.DataHandlerPlugin import DataHandlerPlugin


class APSXPCS(DataHandlerPlugin):
"""
Handles ingestion of APS XPCS .hdf files.

Internally, these .hdf files will hold a reference to a .bin image file,
which will be loaded as well.
"""
name = 'APSXPCS'
DEFAULT_EXTENTIONS = ['.hdf', '.h5']

def __init__(self, path):
super(APSXPCS, self).__init__()
self.path = path

def __call__(self, data_key='', slc=0, **kwargs):
# TODO -- change to show image data once image data is being ingested
h5 = h5py.File(self.path, 'r')
if data_key == 'norm-0-g2':
return h5['exchange'][data_key][:,:,slc].transpose()
return h5['exchange'][data_key][slc]

@classmethod
def ingest(cls, paths):
updated_doc = dict()
# TODO -- update for multiple paths (pending dbheader interface)
if len(paths) > 1:
paths = [paths[0]]
message = 'Opening multiple already-processed data sources is not yet supported. '
message += f'Opening the first image, {paths[0]}...'
notifyMessage(message, level=WARNING)
print(f'PATHS: {paths}')
for name, doc in cls._createDocument(paths):
if name == 'start':
updated_doc[name] = doc
# TODO -- should 'sample_name' and 'paths' be something different?
doc['sample_name'] = cls.title(paths)
doc['paths'] = paths
if name == 'descriptor':
if updated_doc.get('descriptors'):
updated_doc['descriptors'].append(doc)
else:
updated_doc['descriptors'] = [doc]
if name == 'event':
if updated_doc.get('events'):
updated_doc['events'].append(doc)
else:
updated_doc['events'] = [doc]
if name == 'stop':
updated_doc[name] = doc

return updated_doc

@classmethod
def title(cls, paths):
"""Returns the title of the start_doc sample_name"""
# return the file basename w/out extension
# TODO -- handle multiple paths
return Path(paths[0]).resolve().stem

@classmethod
def _createDocument(cls, paths):
# TODO -- add frames after being able to read in bin images
for path in paths:
timestamp = time.time()

run_bundle = event_model.compose_run()
yield 'start', run_bundle.start_doc

source = 'APS XPCS' # TODO -- find embedded source info?
frame_data_keys = {'frame': {'source': source, 'dtype': 'number', 'shape': []}}
frame_stream_name = 'primary'
frame_stream_bundle = run_bundle.compose_descriptor(data_keys=frame_data_keys,
name=frame_stream_name)
yield 'descriptor', frame_stream_bundle.descriptor_doc

# 'name' is used as an identifier for results when plotting
reduced_data_keys = {
'g2': {'source': source, 'dtype': 'number', 'shape': [61]},
'g2_err': {'source': source, 'dtype': 'number', 'shape': [61]},
'lag_steps': {'source': source, 'dtype': 'number', 'shape': [61]},
'fit_curve': {'source': source, 'dtype': 'number', 'shape': [61]},
'name': {'source': source, 'dtype': 'string', 'shape': []}
}
result_stream_name = 'reduced'
reduced_stream_bundle = run_bundle.compose_descriptor(data_keys=reduced_data_keys,
name=result_stream_name)
yield 'descriptor', reduced_stream_bundle.descriptor_doc

h5 = h5py.File(path, 'r')
frames = []
# TODO -- use the processed data timestamp?
for frame in frames:
yield 'event', frame_stream_bundle.compose_event(data={'frame': frame},
timestamps={'frame': timestamp})

lag_steps = h5['exchange']['tau'][()]
roi_list = h5['xpcs']['dqlist'][()].squeeze()
for g2, err, fit_curve, roi in zip(h5['exchange']['norm-0-g2'][()].T,
h5['exchange']['norm-0-stderr'][()].T,
h5['exchange']['g2avgFIT1'][()].T,
roi_list):
yield 'event', reduced_stream_bundle.compose_event(
data={'g2': g2,
'g2_err': err,
'lag_steps': lag_steps,
'fit_curve': fit_curve,
'name': f'q = {roi:.3g}'},
# TODO -- timestamps from h5?
timestamps={'g2': timestamp,
'g2_err': timestamp,
'lag_steps': timestamp,
'fit_curve': timestamp,
'name': timestamp})

yield 'stop', run_bundle.compose_stop()
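The `ingest` classmethod above consumes the `(name, doc)` pairs that `_createDocument` yields and folds them into one dict, keeping the single start/stop documents and accumulating descriptors and events into lists. A minimal standalone sketch of that grouping (plain dicts stand in for real event-model documents; the sample payloads are illustrative):

```python
# Sketch of the document-grouping pattern used by APSXPCS.ingest.

def group_documents(pairs):
    grouped = {}
    for name, doc in pairs:
        if name in ('start', 'stop'):
            grouped[name] = doc                       # singletons
        elif name == 'descriptor':
            grouped.setdefault('descriptors', []).append(doc)
        elif name == 'event':
            grouped.setdefault('events', []).append(doc)
    return grouped

pairs = [
    ('start', {'uid': 'abc'}),
    ('descriptor', {'name': 'primary'}),
    ('descriptor', {'name': 'reduced'}),
    ('event', {'data': {'g2': [1.5, 1.2]}}),
    ('stop', {'exit_status': 'success'}),
]
grouped = group_documents(pairs)
```

Here `grouped` ends up with one `start`, one `stop`, two descriptors, and one event, mirroring the shape `ingest` returns.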
3 changes: 3 additions & 0 deletions xicam/XPCS/formats/APSXPCS.yapsy-plugin
@@ -0,0 +1,3 @@
[Core]
Name = xicam.XPCS.formats.APSXPCS
Module = APSXPCS.py
Empty file added xicam/XPCS/formats/__init__.py
Empty file.
62 changes: 62 additions & 0 deletions xicam/XPCS/processing/fitting.py
@@ -0,0 +1,62 @@
import numpy as np
import skbeam.core.correlation as corr
from astropy.modeling import Fittable1DModel, Parameter, fitting

from xicam.plugins import Input, InputOutput, Output, ProcessingPlugin


class ScatteringModel(Fittable1DModel):
inputs = ('lag_steps',)
outputs = ('g2',)

relaxation_rate = Parameter()

def __init__(self, beta, baseline=1.0, **kwargs):
self.beta = beta
self.baseline = baseline
super(ScatteringModel, self).__init__(**kwargs)

def evaluate(self, lag_steps, relaxation_rate):
return corr.auto_corr_scat_factor(lag_steps, self.beta, relaxation_rate, self.baseline)

def fit_deriv(self, lag_steps, relaxation_rate):
d_relaxation_rate = -2 * self.beta * relaxation_rate * np.exp(-2 * relaxation_rate * lag_steps)
return [d_relaxation_rate]


class FitScatteringFactor(ProcessingPlugin):
name = "Fit Scattering Factor"

g2 = InputOutput(description="normalized intensity-intensity time autocorrelation",
type=np.ndarray,
visible=False)
lag_steps = InputOutput(description="delay time",
type=np.ndarray,
visible=False)

beta = Input(description="optical contrast (speckle contrast), a sample-independent beamline parameter",
type=float,
name="speckle contrast",
default=1.0)
baseline = Input(description="baseline of one time correlation equal to one for ergodic samples",
type=float,
default=1.0)
correlation_threshold = Input(description="threshold defining which g2 values to fit",
type=float,
default=1.5)

fit_curve = Output(description="fitted model of the g2 curve",
type=np.ndarray)
relaxation_rate = Output(description="relaxation time associated with the samples dynamics",
type=float)

def evaluate(self):
relaxation_rate = 0.01 # Some initial guess
model = ScatteringModel(self.beta.value, self.baseline.value, relaxation_rate=relaxation_rate)
fitter = fitting.SLSQPLSQFitter()
threshold = min(len(self.lag_steps.value), np.argmax(self.g2.value < self.correlation_threshold.value))

fit = fitter(model, self.lag_steps.value[:threshold], self.g2.value[:threshold])

self.relaxation_rate.value = fit.relaxation_rate.value
self.fit_curve.value = fit(self.lag_steps.value)
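The model being fit is the standard one-time scattering factor, g2(τ) = baseline + β·exp(−2Γτ), and the `threshold` line restricts the fit to the leading lag steps where g2 is still above `correlation_threshold`. A numpy-only sketch of both pieces (a hedged stand-in for `skbeam.core.correlation.auto_corr_scat_factor`, not the real call; values are synthetic):

```python
import numpy as np

def scattering_factor(lag_steps, beta, relaxation_rate, baseline=1.0):
    # g2(tau) = baseline + beta * exp(-2 * relaxation_rate * tau)
    return baseline + beta * np.exp(-2.0 * relaxation_rate * lag_steps)

lag_steps = np.arange(1, 61, dtype=float)
g2 = scattering_factor(lag_steps, beta=1.0, relaxation_rate=0.05)

# Mirror of the threshold logic in FitScatteringFactor.evaluate():
# np.argmax on the boolean array finds the first lag where g2 drops
# below the threshold, capped at the full length of the data.
threshold = min(len(lag_steps), np.argmax(g2 < 1.5))
fit_region = lag_steps[:threshold]
```

With these parameters g2 first falls below 1.5 once exp(−0.1τ) < 0.5, i.e. after τ ≈ 6.9, so only the first six lag steps would enter the fit. Note the pattern shares the edge case of the original: if g2 never crosses the threshold, `np.argmax` returns 0 and the fit region is empty.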
4 changes: 4 additions & 0 deletions xicam/XPCS/processing/fitting.yapsy-plugin
@@ -0,0 +1,4 @@
[Core]
Name = xicam.XPCS.processing.fitting.FitScatteringFactor
Module = fitting.py

4 changes: 2 additions & 2 deletions xicam/XPCS/processing/fourierautocorrelator.py
@@ -5,12 +5,12 @@


class FourierCorrelation(ProcessingPlugin):
data = Input(description='Array of two or more dimensions.', type=np.ndarray)
data = Input(description='Array of two or more dimensions.', type=np.ndarray, visible=False)

labels = Input(description="""Labeled array of the same shape as the image stack.
Each ROI is represented by sequential integers starting at one. For
example, if you have four ROIs, they must be labeled 1, 2, 3,
4. Background is labeled as 0""", type=np.array)
4. Background is labeled as 0""", type=np.ndarray, visible=False)

g2 = Output(description="""the normalized correlation shape is (len(lag_steps), num_rois)""", type=np.array)

41 changes: 26 additions & 15 deletions xicam/XPCS/processing/onetime.py
@@ -1,25 +1,36 @@
from xicam.plugins import ProcessingPlugin, Input, Output, InOut
import skbeam.core.correlation as corr
import numpy as np
import skbeam.core.correlation as corr

from xicam.plugins import Input, Output, ProcessingPlugin


class OneTimeCorrelation(ProcessingPlugin):
data = Input(description='Array of two or more dimensions.', type=np.ndarray)
data = Input(description='Array of two or more dimensions.', type=np.ndarray, visible=False)

labels = Input(description="""Labeled array of the same shape as the image stack.
labels = Input(description='''Labeled array of the same shape as the image stack.
Each ROI is represented by sequential integers starting at one. For
example, if you have four ROIs, they must be labeled 1, 2, 3,
4. Background is labeled as 0""", type=np.array)
num_levels = Input(description="""how many generations of downsampling to perform, i.e., the depth of
the binomial tree of averaged frames""", type=int, default=7)
num_bufs = Input(description="""must be even
maximum lag step to compute in each generation of downsampling""", type=int, default=8)
4. Background is labeled as 0''',
type=np.ndarray,
visible=False)
# Set to num_levels to 1 if multi-tau correlation isn't desired,
# then set num_bufs to number of images you wish to correlate
num_levels = Input(description='''How many generations of downsampling to perform, i.e., the depth of
the binomial tree of averaged frames''',
type=int,
default=1,
name='number of levels')
num_bufs = Input(description='must be even maximum lag step to compute in each generation of downsampling',
type=int,
default=1000,
name='number of buffers')

g2 = Output(description="""the normalized correlation shape is (len(lag_steps), num_rois)""", type=np.array)
g2 = Output(description='the normalized correlation shape is (len(lag_steps), num_rois)',
type=np.ndarray)
lag_steps = Output(type=np.ndarray)

def evaluate(self):
self.g2.value, lag_steps = corr.multi_tau_auto_corr(self.num_levels.value,
self.num_bufs.value,
self.labels.value.astype(np.int),
np.array(self.data.value).astype(np.int))
# seems to only work with ints
self.g2.value, self.lag_steps.value = corr.multi_tau_auto_corr(self.num_levels.value,
self.num_bufs.value,
self.labels.value.astype(np.int),
np.asarray(self.data.value))
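For intuition about what `multi_tau_auto_corr` computes, here is a brute-force sketch of the one-time correlation for a single ROI: g2(τ) = ⟨I(t)·I(t+τ)⟩ / ⟨I⟩², averaged over the ROI's pixels. This naive loop ignores skbeam's multi-tau downsampling (the `num_levels`/`num_bufs` machinery) and is only suitable for tiny synthetic stacks:

```python
import numpy as np

def naive_g2(stack, labels, roi=1, max_lag=4):
    # Select the ROI's pixels: (n_frames, n_roi_pixels).
    pixels = stack[:, labels == roi]
    mean_sq = pixels.mean() ** 2
    g2 = []
    for tau in range(1, max_lag + 1):
        # Average of I(t) * I(t + tau) over frames and ROI pixels.
        num = (pixels[:-tau] * pixels[tau:]).mean()
        g2.append(num / mean_sq)
    return np.array(g2)

rng = np.random.default_rng(0)
stack = rng.poisson(5.0, size=(100, 4, 4)).astype(float)  # uncorrelated frames
labels = np.ones((4, 4), dtype=int)                       # one ROI covering all pixels
g2 = naive_g2(stack, labels)
```

For uncorrelated Poisson frames like these, g2 stays close to 1 at every lag; correlated dynamics would lift the short-lag values above the baseline, which is exactly the decay the fitting stage models.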