Merge pull request #569 from flatironinstitute/dev
Dev
epnev authored Jul 5, 2019
2 parents c26be44 + 838f0e5 commit 482b4a2
Showing 25 changed files with 2,728 additions and 434 deletions.
24 changes: 24 additions & 0 deletions GUI.md
@@ -0,0 +1,24 @@
## Graphical interface

<img src="docs/img/GUI_img.png" width="1000" align="center">

CaImAn comes with an experimental visual interface. To see an example of how to use it, first load and run one of the following demos:
* demo_OnACID_mesoscope.py
* demo_caiman_basic.py
* demo_pipeline.py

Each of these demos will save a results file. You can then start the visual interface by running the following
command from the base caiman folder (make sure you are within your caiman environment):
```
ipython caiman/gui/gui_pyqtgraph_layout.py
```
You will then be asked to load the results file (ending in `.hdf5`) that each demo generates.

A visual interface will appear, where you will be able to:
* adjust the gain and contrast of the background image (correlation image)
* adjust the threshold applied to the spatial masks to visualize component contours
* click on neurons and see the corresponding trace and mask
* select a subset of neurons based on different quality metrics
* save the resulting selection to a file in HDF5 format (a sketch for inspecting this file outside the GUI is shown below)
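
The saved selection is a plain HDF5 file, so it can also be inspected outside the GUI. A minimal sketch follows; the file name is hypothetical and the exact group layout may vary between versions:

```
# Minimal sketch: listing the contents of the HDF5 file written by the GUI
import h5py

with h5py.File('gui_selection.hdf5', 'r') as f:   # file name chosen when saving from the GUI
    f.visit(print)                                # print every group/dataset stored in the file
```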

More features will be added in the future.
11 changes: 11 additions & 0 deletions README.md
@@ -3,11 +3,13 @@ Position available

The CaImAn team is hiring! We're looking for a data scientist/software engineer with a strong research component. For more information please follow [this link](https://simonsfoundation.wd1.myworkdayjobs.com/en-US/simonsfoundationcareers/job/162-Fifth-Avenue/Software-Engineer_R0000500).


CaImAn
======
<img src="https://github.com/flatironinstitute/CaImAn/blob/master/docs/LOGOS/Caiman_logo_FI.png" width="500" align="right">



[![Join the chat at https://gitter.im/agiovann/SOURCE_EXTRACTION_PYTHON](https://badges.gitter.im/agiovann/SOURCE_EXTRACTION_PYTHON.svg)](https://gitter.im/agiovann/SOURCE_EXTRACTION_PYTHON?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)


@@ -32,6 +34,15 @@ A paper explaining most of the implementation details and benchmarking can be found

All the results and figures of the paper can be regenerated using this package. For more information visit this [page](https://github.com/flatironinstitute/CaImAn/tree/master/use_cases/eLife_scripts).

## New: Exporting results, GUI and NWB support (July 2019)

You can now use the `save` method included in both the `CNMF` and `OnACID` objects to export the results (and the parameters used) of your analysis. The results are saved in an HDF5 file that you can then load into a graphical user interface for further inspection. The GUI allows you to inspect the results and modify the set of selected components based on the various quality metrics. For more information click [here](GUI.md).
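
A minimal sketch of the export step (assuming `cnm` is an already fitted `CNMF` or `OnACID` object, and that the `load_CNMF` helper shipped with recent versions is used to read the file back):

```
# Minimal sketch, assuming `cnm` is a fitted CNMF/OnACID object from one of the demos
# and that load_CNMF is available in caiman.source_extraction.cnmf.cnmf
from caiman.source_extraction.cnmf.cnmf import load_CNMF

cnm.save('analysis_results.hdf5')                   # writes estimates and parameters to HDF5
cnm_restored = load_CNMF('analysis_results.hdf5')   # reload the results for further inspection
print(cnm_restored.estimates.C.shape)               # e.g. the temporal traces of the components
```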

The [Neurodata Without Borders (NWB)](https://www.nwb.org/) file format is now supported by CaImAn. You can read and analyze NWB files and save the results of the analysis (`Estimates` object) back to the original NWB file. Consult this [demo](use_cases/NWB/demo_pipeline_NWB.py) for an example of how to use this feature.
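
A minimal sketch of reading back the imaging data from an NWB file (the file name is illustrative, and the exact value of `var_name_hdf5` depends on where the `TwoPhotonSeries` lives inside the file, e.g. `'acquisition/mov'` for a series named `mov`):

```
# Minimal sketch: loading the imaging data stored in an NWB acquisition series
import caiman as cm

m = cm.load('example.nwb', var_name_hdf5='acquisition/mov')  # path to the TwoPhotonSeries group
m.play(magnification=2, q_max=99.5)                          # quick visual check
```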

**To use CaImAn with these additional features you'll need to create a new environment following the usual instructions.**


## New: Removing Keras dependency (June 2019)

To circumvent a problem arising during Windows installation, we recently removed Keras from the list of dependencies. Keras was used to deploy the pretrained neural network models for component screening. The models are now deployed through TensorFlow, and for that purpose we have included TensorFlow-compatible versions of them inside the folder `model`. Existing users who already have Keras in their environment will continue to use it, as it is slightly faster. However, if you create an environment without Keras, you may want to either reinstall caimanmanager or simply copy the files `model/*.pb` into the folder `caiman_data/model/` so that they can be discovered; a minimal copy sketch is shown below. New CaImAn users do not need to do anything, as this is taken care of during the installation process.
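
For the manual route, the following is a minimal sketch of the copy step (the paths are assumptions: run from the CaImAn source folder, with `caiman_data` in your home directory):

```
# Minimal sketch: copying the TensorFlow model files into caiman_data/model/
# Paths are assumptions: run from the CaImAn source checkout, caiman_data in $HOME.
import glob, os, shutil

dest = os.path.join(os.path.expanduser('~'), 'caiman_data', 'model')
os.makedirs(dest, exist_ok=True)
for fname in glob.glob(os.path.join('model', '*.pb')):
    shutil.copy(fname, dest)
```
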
63 changes: 49 additions & 14 deletions caiman/base/movies.py
@@ -971,7 +971,8 @@ def zproject(self, method='mean', cmap=pl.cm.gray, aspect='auto', **kwargs) -> n
return zp

def play(self, gain:float=1, fr=None, magnification=1, offset=0, interpolation=cv2.INTER_LINEAR,
backend:str='opencv', do_loop:bool=False, bord_px=None, q_max=100, q_min = 0, plot_text:bool=False) -> None:
backend:str='opencv', do_loop:bool=False, bord_px=None, q_max=100, q_min = 0, plot_text:bool=False,
save_movie:bool=False, movie_name:str='movie.avi') -> None:
"""
Play the movie using opencv
@@ -980,7 +981,8 @@ def play(self, gain:float=1, fr=None, magnification=1, offset=0, interpolation=c
fr: framerate, playing speed if different from original (inter frame interval in seconds)
magnification: (undocumented)
magnification: int
magnification factor
offset: (undocumented)
@@ -990,11 +992,20 @@ def play(self, gain:float=1, fr=None, magnification=1, offset=0, interpolation=c
do_loop: Whether to loop the video
bord_px: (undocumented)
bord_px: int
truncate pixels from the borders
q_max, q_min: (undocumented)
q_max, q_min: float in [0, 100]
percentile for maximum/minimum plotting value
plot_text: (undocumented)
plot_text: bool
show some text
save_movie: bool
flag to save an avi file of the movie
movie_name: str
name of saved file
Raises:
Exception 'Unknown backend!'
@@ -1047,7 +1058,17 @@ def animate(i):

looping = True
terminated = False

if save_movie:
#fourcc = cv2.VideoWriter_fourcc('8', 'B', 'P', 'S')
#fourcc = cv2.VideoWriter_fourcc(*'XVID')
#fourcc = cv2.VideoWriter_fourcc(*'DIVX')
#fourcc = cv2.VideoWriter_fourcc(*'X264')
fourcc = cv2.VideoWriter_fourcc(*'MP4V')
frame_in = self[0]
if bord_px is not None and np.sum(bord_px) > 0:
frame_in = frame_in[bord_px:-bord_px, bord_px:-bord_px]
out = cv2.VideoWriter(movie_name, fourcc, 30.,
tuple([int(magnification*s) for s in frame_in.shape[::-1]]))
while looping:

for iddxx, frame in enumerate(self):
@@ -1066,7 +1087,8 @@ def animate(i):
frame.shape[0] - (text_height + 5)), fontFace=5, fontScale=0.8, color=(255, 255, 255), thickness=1)

cv2.imshow('frame', frame)

if save_movie:
out.write(frame.astype('uint8'))
if cv2.waitKey(int(1. / fr * 1000)) & 0xFF == ord('q'):
looping = False
terminated = True
@@ -1094,6 +1116,10 @@ def animate(i):
if terminated:
break

if save_movie:
out.release()
save_movie = False

if do_loop:
looping = True
else:
@@ -1165,10 +1191,11 @@ def load(file_name, fr:float=30, start_time:float=0, meta_data:Dict=None, subind
if shape is not None:
logging.error('shape not supported for multiple movie input')

return load_movie_chain(file_name,fr=fr, start_time=start_time,
return load_movie_chain(file_name, fr=fr, start_time=start_time,
meta_data=meta_data, subindices=subindices,
bottom=bottom, top=top, left=left, right=right,
channel = channel, outtype=outtype)
channel = channel, outtype=outtype,
var_name_hdf5=var_name_hdf5)

if max(top, bottom, left, right) > 0:
logging.error('top bottom etc... not supported for single movie input')
@@ -1328,16 +1355,22 @@ def rgb2gray(rgb):
fkeys = list(f.keys())
if len(fkeys) == 1:
var_name_hdf5 = fkeys[0]

if extension == '.nwb':
fgroup = f[var_name_hdf5]['data']
else:
fgroup = f[var_name_hdf5]

if var_name_hdf5 in f:
if subindices is None:
images = np.array(f[var_name_hdf5]).squeeze()
images = np.array(fgroup).squeeze()
#if images.ndim > 3:
# images = images[:, 0]
else:
if type(subindices).__module__ is 'numpy':
subindices = subindices.tolist()
images = np.array(
f[var_name_hdf5][subindices]).squeeze()
fgroup[subindices]).squeeze()
#if images.ndim > 3:
# images = images[:, 0]

@@ -1398,9 +1431,10 @@ def rgb2gray(rgb):


def load_movie_chain(file_list:List[str], fr:float=30, start_time=0,
meta_data=None, subindices=None,
meta_data=None, subindices=None, var_name_hdf5:str='mov',
bottom=0, top=0, left=0, right=0, z_top=0,
z_bottom=0, is3D:bool=False, channel=None, outtype=np.float32) -> Any:
z_bottom=0, is3D:bool=False, channel=None,
outtype=np.float32) -> Any:
""" load movies from list of file names
Args:
@@ -1423,7 +1457,8 @@ def load_movie_chain(file_list:List[str], fr:float=30, start_time=0,
mov = []
for f in tqdm(file_list):
m = load(f, fr=fr, start_time=start_time,
meta_data=meta_data, subindices=subindices, in_memory=True, outtype=outtype)
meta_data=meta_data, subindices=subindices,
in_memory=True, outtype=outtype, var_name_hdf5=var_name_hdf5)
if channel is not None:
logging.debug(m.shape)
m = m[channel].squeeze()
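
A short usage sketch of the additions to this file (file names are illustrative): `play` can now write what it displays to a video file via `save_movie`/`movie_name`, and `load`/`load_movie_chain` now forward `var_name_hdf5` to HDF5/NWB inputs.

```
# Illustrative sketch of the new keyword arguments introduced in this file
import caiman as cm

m = cm.load_movie_chain(['part1.hdf5', 'part2.hdf5'], var_name_hdf5='mov')
m.play(fr=30, magnification=2, q_max=99.5,
       save_movie=True, movie_name='preview.avi')  # also writes preview.avi while playing
```
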
93 changes: 76 additions & 17 deletions caiman/base/timeseries.py
@@ -18,7 +18,11 @@
from scipy.io import savemat
import tifffile
import warnings

from datetime import datetime
from dateutil.tz import tzlocal
from pynwb import NWBHDF5IO, NWBFile
from pynwb.ophys import TwoPhotonSeries, OpticalChannel
from pynwb.device import Device
from caiman.paths import memmap_frames_filename

try:
@@ -114,10 +118,31 @@ def __array_finalize__(self, obj):
self.file_name = getattr(obj, 'file_name', None)
self.meta_data = getattr(obj, 'meta_data', None)

def save(self, file_name, to32=True, order='F',imagej=False, bigtiff=True,
software='CaImAn', compress=0, var_name_hdf5='mov'):
def save(self,
file_name,
to32=True,
order='F',
imagej=False,
bigtiff=True,
excitation_lambda=488.0,
compress=0,
var_name_hdf5='mov',
sess_desc='some_description',
identifier='some identifier',
exp_desc='experiment description',
imaging_plane_description='some imaging plane description',
emission_lambda=520.0,
indicator='OGB-1',
location='brain',
starting_time=0.,
experimenter='Dr Who',
lab_name='',
institution='',
experiment_description='Experiment Description',
session_id='Session ID'):
"""
Save the timeseries in various formats
Save the timeseries in single precision. Supported formats include
TIFF, NPZ, AVI, MAT, HDF5/H5, MMAP, and NWB
Args:
file_name: str
@@ -140,23 +165,16 @@ def save(self, file_name, to32=True, order='F',imagej=False, bigtiff=True,
extension = extension.lower()
logging.debug("Parsing extension " + str(extension))

if extension == '.tif': # load avi file

if extension == '.tif':
with tifffile.TiffWriter(file_name, bigtiff=bigtiff, imagej=imagej) as tif:


for i in range(self.shape[0]):
if i % 200 == 0:
logging.debug(str(i) + ' frames saved')

curfr = self[i].copy()
if to32 and not('float32' in str(self.dtype)):
curfr = curfr.astype(np.float32)

curfr = curfr.astype(np.float32)
tif.save(curfr, compress=compress)



elif extension == '.npz':
if to32 and not('float32' in str(self.dtype)):
input_arr = self.astype(np.float32)
@@ -165,7 +183,6 @@ def save(self, file_name, to32=True, order='F',imagej=False, bigtiff=True,

np.savez(file_name, input_arr=input_arr, start_time=self.start_time,
fr=self.fr, meta_data=self.meta_data, file_name=self.file_name)

elif extension == '.avi':
codec = None
try:
@@ -221,8 +238,7 @@ def save(self, file_name, to32=True, order='F',imagej=False, bigtiff=True,
logging.warning('No file saved')
if self.meta_data[0] is not None:
logging.debug("Metadata for saved file: " + str(self.meta_data))
dset.attrs["meta_data"] = cpk.dumps(self.meta_data)

dset.attrs["meta_data"] = cpk.dumps(self.meta_data)
elif extension == '.mmap':
base_name = name

@@ -239,12 +255,55 @@ def save(self, file_name, to32=True, order='F',imagej=False, bigtiff=True,
fname_tot = memmap_frames_filename(base_name, dims, T, order)
fname_tot = os.path.join(os.path.split(file_name)[0], fname_tot)
big_mov = np.memmap(fname_tot, mode='w+', dtype=np.float32,
shape=(np.uint64(np.prod(dims)), np.uint64(T)), order=order)
shape=(np.uint64(np.prod(dims)),
np.uint64(T)), order=order)

big_mov[:] = np.asarray(input_arr, dtype=np.float32)
big_mov.flush()
del big_mov, input_arr
return fname_tot
elif extension == '.nwb':
if to32 and not('float32' in str(self.dtype)):
input_arr = self.astype(np.float32)
else:
input_arr = np.array(self)
# Create NWB file
nwbfile = NWBFile(sess_desc, identifier,
datetime.now(tzlocal()),
experimenter=experimenter,
lab=lab_name,
institution=institution,
experiment_description=experiment_description,
session_id=session_id)
# Get the device
device = Device('imaging_device')
nwbfile.add_device(device)
# OpticalChannel
optical_channel = OpticalChannel('OpticalChannel',
'main optical channel',
emission_lambda=emission_lambda)
imaging_plane = nwbfile.create_imaging_plane(name='ImagingPlane',
optical_channel=optical_channel,
description=imaging_plane_description,
device=device,
excitation_lambda=excitation_lambda,
imaging_rate=self.fr,
indicator=indicator,
location=location)
# Images
image_series = TwoPhotonSeries(name=var_name_hdf5, dimension=self.shape[1:],
data=input_arr,
imaging_plane=imaging_plane,
starting_frame=[0],
starting_time=starting_time,
rate=self.fr)

nwbfile.add_acquisition(image_series)

with NWBHDF5IO(file_name, 'w') as io:
io.write(nwbfile)

return file_name

else:
logging.error("Extension " + str(extension) + " unknown")
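
A hedged sketch of exercising the new NWB branch of `save` (all file names and metadata values below are placeholders that map onto the keyword arguments added in this commit):

```
# Illustrative sketch: writing a movie as a TwoPhotonSeries inside a new NWB file
import numpy as np
import caiman as cm

m = cm.movie(np.random.rand(100, 64, 64).astype(np.float32), fr=30)
m.save('synthetic.nwb', var_name_hdf5='mov',
       sess_desc='synthetic test session', identifier='sess-001',
       imaging_plane_description='simulated plane', indicator='GCaMP6f',
       location='V1', experimenter='A. Researcher', session_id='001')
```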