Explorations in Parallel and Distributed Computing: Finding Ways to Build Software Faster - That Runs Faster

Faculty Sponsor

Nicholas Rosasco


Arts and Sciences


Computer and Information Science



Presentation Type

Poster Presentation

Symposium Date

Summer 7-29-2022


INTRODUCTION: Parallel computing on a budget is a concern for industry and academia alike. These solutions are often expensive to set up and maintain, not to mention complex, and must be carefully fitted to specific system constraints. Historically, putting this technology within reach, both in the classroom and in the deployment of components, has been challenging. Two different approaches were pursued in a series of experiments to help reduce the barrier to entry.

OBJECTIVE: The first approach has been to investigate less-expensive interconnect solutions and configurations, making it easier and less costly to create laboratory configurations of high-performance systems. The second has been to investigate less commonly used languages - principally Haskell - for use with the standard environments for distributed computing.

METHODS: To determine the performance of the cluster, we used an array of benchmarks, including the compilation of various software packages as well as locally developed benchmark programs. The benchmark programs, written in both Haskell and C, implement common problems that exhibit parallel behavior.
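The abstract does not list the specific benchmark problems, so as an illustration only, the following is a minimal sketch of the kind of Haskell program commonly used for this purpose: a naive Fibonacci computation parallelized with the `par`/`pseq` primitives from GHC's base library. The cutoff value and problem size are assumptions, not values from the study.

```haskell
import GHC.Conc (par, pseq)

-- Naive Fibonacci: deliberately expensive, a standard parallel benchmark kernel.
fib :: Int -> Int
fib n
  | n < 2     = n
  | otherwise = fib (n - 1) + fib (n - 2)

-- Parallel version: spark evaluation of the first recursive call while the
-- current thread evaluates the second. Below a cutoff (chosen arbitrarily
-- here), fall back to the sequential version so that spark-creation overhead
-- does not swamp the useful work.
pfib :: Int -> Int
pfib n
  | n < 20    = fib n
  | otherwise = x `par` (y `pseq` (x + y))
  where
    x = pfib (n - 1)
    y = pfib (n - 2)

main :: IO ()
main = print (pfib 30)
```

Compiled with `ghc -threaded` and run with `+RTS -N4` (or similar), a program of this shape exercises GHC's multicore runtime, which is the kind of behavior the locally developed benchmarks are described as demonstrating.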

RESULTS: With regard to the ease of targeting a multicore environment, now the dominant hardware paradigm, Haskell has performed surprisingly well. Initial performance has met expectations, but there are interesting phenomena that require further investigation.

CONCLUSION: This research provides a starting point for future study of these performance questions. It also provides baselines for comparison with other hardware configurations as new components and options become available.
