
The Neighborhood Viewer

Once upon a time... The context of this story is a larger, interdisciplinary project to extend database management system (DBMS) technology to support, in an integrated fashion, large volumes of neuroscience data from structural, functional, and behavioral studies using rodents. We intend to integrate neuroscience at the data level and thereby both enable cross-disciplinary neuroscience research and drive applied computer science research [1,3]. While our initial focus was on structural data (primarily images obtained from several different modalities, e.g., confocal microscope, laser microscope, and digital camera), it quickly became apparent that we had an opportunity to build novel user interfaces. This story describes their evolution.

As we proceeded, it became clear that none of us was versed deeply enough in both neuroscience and computing to develop these applications alone. Accordingly, our paradigm required two crucial elements:

mutual education
We needed to educate each other. While computer scientists were already building scientific databases [7,2] and neuroscientists were computer literate, there was much vocabulary to learn just so we could talk to each other. Computer scientists also needed to learn what neuroscience is about: the issues it addresses and the ways neuroscientists work. Notably, neuroscience work changes as new imaging technology becomes available. Neuroscientists, in turn, needed to learn how computer scientists work and to get a sense of what is possible.

joint invention
We worked together to invent interfaces, iterating several times through a two-stage process. First, computer scientists observed neuroscientists doing neuroscience, and then we jointly held brainstorming sessions. Second, the computer scientists designed and built a prototype, brainstormed among themselves on the prototype's interface details, and then examined it jointly with the neuroscientists. All of us were stimulated to invent new interface features.

Figure: The neighborhood viewer. The three brain images are the coronal, horizontal, and sagittal views. Each view has yoke controls, pan/zoom controls, and separate x, y, and z coordinates.

These elements occurred simultaneously. Indeed, we did not begin to truly understand each other until our minds met at a prototype's interface. Note that we did not prototype as a means for users to transmit known but unarticulated requirements.
What we did was invent new tools, and, thus, new ways to do neuroscience. Note also that a) we did not try to have each prototype support every valuable feature; b) while eventually much of our data will be stored by a DBMS, we did not insist on using the DBMS for prototyping; and c) the later iterations described next really did overlap somewhat, but the story is simpler to tell if we ignore that.

In the first iteration computer scientists found out that confocal images a) are captured in modest-sized rectangles (about 500x750 pixels) using one to three wavelength filters, and b) are, in a labor-intensive fashion, beginning to be ``montaged'' into larger rectangles. Confocal microscopy makes it possible to capture many images at varying depths (z values) from one specimen. We characterized the data as ``locally dense, but globally sparse'': most neuroscientific work focuses on a few ``named brain locations.'' Neuroscientists are interested in the spatial juxtaposition of, for example, opioid receptors and other receptors, each appearing under a different filter. A simple viewer was built that had straightforward pan and zoom features, a bookmark feature, and, mimicking what happens at the microscope, could ``flicker'' between the parts of an image captured with different filters. Together we decided what the next prototype should support.

It was not hard for us to decide at that point to implement in Tcl/Tk, because a) we were already comfortable with Tcl/Tk, and b) we did not know where the interface would evolve.
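To give the flavor of that first prototype, here is a minimal sketch of the ``flicker'' feature in Tcl/Tk. The file names, widget names, and the half-second interval are illustrative assumptions, not the prototype's actual code:

    # Flicker between two filter channels of the same field,
    # mimicking what a neuroscientist does at the microscope.
    package require Tk

    image create photo filterA -file channelA.gif  ;# hypothetical files
    image create photo filterB -file channelB.gif

    canvas .view -width 750 -height 500
    pack .view
    set img [.view create image 0 0 -anchor nw -image filterA]

    set showing A
    proc flicker {} {
        global showing img
        if {$showing eq "A"} {
            .view itemconfigure $img -image filterB
            set showing B
        } else {
            .view itemconfigure $img -image filterA
            set showing A
        }
        after 500 flicker                          ;# reschedule
    }
    flicker

Panning, for instance, comes essentially for free from the canvas widget's built-in scrolling support, which is part of why such a viewer stays small.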

In the second iteration computer scientists learned that images were generally from a coronal cutting plane, but that sagittal and horizontal cutting planes (respectively: vertical, front to back; vertical, side to side; and horizontal, top to bottom) are also valuable. They observed neuroscientists, to help themselves determine ``where am I,'' using a rat brain atlas consisting of coronal images with a visual index: a sagittal silhouette plus a ``here'' vertical line. Computer scientists saw expert neuroscience researchers using the atlas much less than graduate students did. Together we decided to build the neighborhood viewer shown in the figure above, with yoked coronal, horizontal, and sagittal views.
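The atlas's visual index is easy to mimic in Tk. The following sketch (coordinates and widget names invented for illustration) draws a stand-in silhouette and moves a ``here'' line as the coronal position changes:

    # A ``where am I'' index: a sagittal outline plus a movable
    # ``here'' line tied to the current coronal position.
    package require Tk

    canvas .idx -width 300 -height 100
    pack .idx
    .idx create oval 10 20 290 80 -outline black   ;# stand-in silhouette
    set here [.idx create line 50 10 50 90 -fill red -width 2]

    proc moveHere {x} {
        global here
        .idx coords $here $x 10 $x 90
    }
    # In the viewer the position would come from the coronal view;
    # a scale widget stands in for it here.
    scale .pos -from 10 -to 290 -orient horizontal -command moveHere
    pack .pos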

As we added more and varied interface elements, the computer scientists found themselves bogged down. Even with Tcl/Tk our code was becoming too intricate, and implementation grew progressively more difficult. This motivated us to consider the constraint model.
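As a concrete illustration of the tangle (procedure names are hypothetical, and stubs stand in for the real drawing code): every setter had to know about everything it affected, so each new widget meant another explicit call:

    # Stubs standing in for real drawing code.
    proc redrawView {v}      { puts "redraw $v view" }
    proc moveHere {x}        { puts "here line -> $x" }
    proc updateReadout {a v} { puts "$a = $v" }

    # The tangle: setting the x coordinate must explicitly refresh
    # every view and readout that depends on it.
    proc setX {x} {
        global curX
        set curX $x
        redrawView coronal
        redrawView horizontal
        redrawView sagittal
        moveHere $x
        updateReadout x $x
    }
    setX 120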

In the third iteration computer scientists observed neuroscientists laying out a handful of printed images of the hippocampus of subject rats. Neuroscientists looked at the side-by-side images and discovered things they could not see before. At the same time, computer scientists observed neuroscientists struggling to work with distant collaborators. Packages, phone calls, and email were slow, thin communication channels. Together we decided to support collaboration directly: yoked viewing across multiple machines, so that distant colleagues could examine the same images together.

Having had some success building the neighborhood viewer with a constraint model, we found it easy to decide to try a distributed constraint model to satisfy the multiple-machine and collaboration requirements.
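As a sketch of the direction this suggested (the peer name viewer2 is hypothetical, and Tk's send only reaches interpreters sharing X authority; our actual mechanism is the distributed constraint model discussed in the following sections):

    # Cross-machine yoking via Tk's send command.
    package require Tk

    proc setX {x} {
        global curX
        set curX $x
        # ... refresh the local views as before ...
    }
    proc shareX {x} {
        setX $x                       ;# update this machine
        catch {send viewer2 setX $x}  ;# push to the peer, if present
    }
    shareX 120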

In the fourth iteration, spurred on by bottlenecks caused by handling multiple displays of large images, computer scientists discovered that neuroscientists did not need fully precise images to determine ``where am I,'' but did require detail to explore ``what is this?'' Together we decided to deliver coarse images quickly for navigation and full detail only on demand.

Image processing costs led us to search for an alternate way to meet these two requirements. Wavelet analysis offered a solution that met both. We found that as little as 0.2% of the data is sufficient for coarse ``where am I'' decisions, while 2% serves for finer ones. For most ``what is this'' questions 5-10% works, and only occasionally is full detail needed.
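The policy this implies is tiny. A sketch (the task names and procedure are invented for illustration; the fractions are the measurements just reported):

    # Choose what fraction of the wavelet coefficients to fetch.
    proc detailFraction {task} {
        switch -- $task {
            where-coarse { return 0.002 }  ;# 0.2% of the data
            where-fine   { return 0.02 }   ;# 2%
            what-is-this { return 0.10 }   ;# 5-10% usually suffices
            default      { return 1.0 }    ;# fall back to full detail
        }
    }
    puts [detailFraction where-coarse]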

In the next sections we present lessons learned in using constraints, distributed constraints, and wavelets, followed by overall lessons and conclusions.


