Exploring Large Data for Scientific Discovery

August 28, 2015

A curse of dealing with mounds of data so massive that they require special tools, said computer scientist Valerio Pascucci, is that if you look for something, you will probably find it, injecting bias into the analysis.


In his plenary talk, “Extreme Data Management Analysis and Visualization: Exploring Large Data for Science Discovery,” delivered July 28 at the XSEDE15 conference in St. Louis, Dr. Pascucci said that obtaining clean, guaranteed, unbiased results from data analysis requires highly interdisciplinary, multi-scale collaboration and techniques that unify the mathematics and computer science behind the applications used in physics, biology, and medicine.

The techniques and use cases he shared during his talk reflected roughly a decade and a half spent in the trenches: understanding research efforts in disparate scientific domains, cutting through differences in terminology, and distilling extensible mathematical foundations that could be applied to developing robust, efficient algorithms and applications.

Fewer Tools but Greater Utility

“You can build an economy of tools by deconstructing the math, looking for commonalities, and developing fewer tools that can be of use to more people,” Pascucci said in a post-talk interview. And to avoid developing biased algorithms, “you try to delay application development as long as possible.” The goal, he noted, is to create techniques that leave little room for mental shortcuts, or heuristics, and emphasize a formalized mathematical approach.

Creating those techniques, however, requires cross-pollination between, and integration of, data management and data analysis, tasks that have traditionally been performed by different communities, Pascucci pointed out. Collaboration that combines those efforts, he added, is a necessary ingredient for a successful supercomputing center or cyberinfrastructure—a dynamic ecosystem of people, advanced computing systems, data, and software.

Processing on the Fly

In managing large datasets, a platform for processing on the fly is important, said Pascucci, because researchers need to be able to make decisions based on incomplete information. “This is something that people often underestimate,” he added.
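To make the idea concrete, here is a minimal sketch of on-the-fly processing, not drawn from Pascucci's talk and using hypothetical names (stream_chunks, running_mean): a running estimate is updated as data chunks arrive, so an analyst can act on partial results instead of waiting for the full dataset to be read.

```python
import random

def stream_chunks(n_chunks, chunk_size=10_000):
    """Hypothetical stand-in for a data source that arrives incrementally."""
    for _ in range(n_chunks):
        yield [random.gauss(0.0, 1.0) for _ in range(chunk_size)]

def running_mean(chunks, tolerance=1e-3):
    """Update a mean on the fly and stop early once successive
    estimates stabilize, i.e., decide with incomplete information."""
    count, mean = 0, 0.0
    for chunk in chunks:
        prev = mean
        for x in chunk:
            count += 1
            mean += (x - mean) / count  # incremental (Welford-style) update
        if abs(mean - prev) < tolerance:
            break  # estimate has stabilized; skip the remaining chunks
    return mean, count

estimate, seen = running_mean(stream_chunks(n_chunks=100))
print(f"mean ~ {estimate:.4f} after {seen} samples")
```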

One innovation that Pascucci and his colleagues at the Center for Extreme Data Management, Analysis, and Visualization (CEDMAV) at the University of Utah have developed is ViSUS, a framework for processing large-scale scientific data with high-performance selective queries over multiple terabytes of raw data. Its data model is combined with progressive streaming techniques that allow processing on a range of computing devices, from an iPhone to a simple workstation to the input/output systems of parallel supercomputers. The framework has, for example, enabled the real-time streaming of massive combustion simulations from U.S. Department of Energy platforms to national laboratories.
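The article does not describe ViSUS's internals, and its actual on-disk format is more sophisticated, but the core idea of progressive, multi-resolution selective queries can be sketched roughly as follows (hypothetical Python, not the ViSUS API): coarse subsampled views of a queried region are returned first, and finer levels follow only for the window the query selects.

```python
import numpy as np

def progressive_query(data, region, max_level):
    """Yield increasingly fine views of `region`, coarsest first.
    The highest level is the most aggressive subsampling; level 0
    returns the region at full resolution."""
    (r0, r1), (c0, c1) = region
    for level in range(max_level, -1, -1):
        step = 2 ** level  # sampling stride halves at each finer level
        yield level, data[r0:r1:step, c0:c1:step]

# Hypothetical 2D field standing in for one slice of a simulation.
field = np.random.rand(4096, 4096)

# A selective query touches only the requested window, so on a
# terabyte-scale dataset only the needed samples would be read.
for level, view in progressive_query(field, ((1024, 2048), (1024, 2048)), max_level=4):
    print(f"level {level}: {view.shape} samples")  # coarse first, fine last
```

In a real out-of-core system the coarse samples would be laid out on disk so they can be fetched without scanning the full-resolution data; the sketch above only illustrates the coarse-to-fine access pattern.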

Read more from HPCwire . . .

