Computing Matchup: Small Supercomputer Cluster vs. Single Node
2018–19 SPS Chapter Research Award
Project Leads: Alex Blose, Ben Kistler, Dany Waller
Chapter Advisor: Charles Brown
Project Summary: Computational methods are powerful tools for solving problems. We built an inexpensive mini supercomputer for use in research, education, outreach, and community building. Once our supercomputer was operational, we then used it to explore how parallel processing impacts the accuracy and processing time of physics-related computing tasks.
Physicists often work with enormous amounts of data that require lots of computational power to analyze. With access to supercomputers, researchers can now run complex computational models and analysis programs more quickly than ever before. Today’s physics students need training in these methods, so our chapter decided to build a mini supercomputer and explore how it processes tasks relevant to physics research.
Our supercomputer is based on Tiny Titan, an inexpensive machine built from a cluster of nine Raspberry Pi computers. Each Raspberry Pi in the Tiny Titan build is known as a "node," and a job can be confined to a single node or spread across several, so you can examine accuracy and processing time as a function of the number of nodes.
Approximately 25 SPS members were engaged in this project. The first step was assembling the Raspberry Pis and loading an operating system. Then we connected them in the Tiny Titan configuration and, after some troubleshooting, our supercomputer came alive! Team members first learned how to access nodes from the master unit and pass commands, and then began studying how the number of nodes impacts the accuracy and efficiency of three types of simulations.
Modeling a Complex Fluid
We installed a fluid dynamics simulation, designed by Oak Ridge National Laboratory, on our cluster. The simulation approximates a fluid as a collection of balls. Users input the physical properties of the fluid and the number of balls in the fluid (similar to how you might define the number of pixels in an image), and the program models the resulting behavior. We found that the best way to improve the accuracy and efficiency of the simulation was to increase the number of balls, divide the fluid into sections, and then assign each section to a different node. In our tests, adding nodes consistently improved both accuracy and processing time.
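To illustrate the section-per-node idea, here is a minimal Python sketch (not the ORNL code) in which the balls are split into contiguous sections and each section is updated by a separate worker process, with `multiprocessing` standing in for the cluster's nodes. The toy `update_section` physics step is a hypothetical placeholder.

```python
# Sketch of the parallel strategy described above: split the "balls" into
# sections and let each worker update its own section independently.
from multiprocessing import Pool

def update_section(section):
    # Toy "physics" step: advance each ball by its velocity for one time step.
    return [(x + vx, vx) for (x, vx) in section]

def split(balls, n_sections):
    """Divide the list of balls into roughly equal contiguous sections."""
    size = (len(balls) + n_sections - 1) // n_sections
    return [balls[i:i + size] for i in range(0, len(balls), size)]

if __name__ == "__main__":
    balls = [(float(i), 0.5) for i in range(8)]    # (position, velocity) pairs
    with Pool(4) as pool:                          # 4 workers ~ 4 nodes
        sections = pool.map(update_section, split(balls, 4))
    updated = [b for sec in sections for b in sec]
    print(updated[0])  # (0.5, 0.5)
```

On the real cluster the sections would be distributed to the Raspberry Pi nodes rather than to local processes, but the decomposition is the same.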
Estimating π
Numbers like π, which is transcendental (it is not the root of any polynomial with integer coefficients), are notoriously hard to calculate to high precision. We estimated π with a Monte Carlo simulation based on the ratio of the area of a circle to the area of the square in which the circle is inscribed. The program computes π using N random points, where N is input by a user. We studied how many digits of π our system could calculate with reasonable accuracy as a function of N and the number of nodes. Our results show that for high values of N, increasing the number of nodes decreases the processing time and improves accuracy.
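The estimate works because a circle inscribed in a square covers π/4 of the square's area, so the fraction of uniformly random points landing inside the circle approaches π/4 as N grows. A minimal serial sketch of this idea (our cluster version distributes the N points across nodes):

```python
# Monte Carlo estimate of pi: sample points in the unit square and count
# how many fall inside the inscribed quarter circle (that fraction -> pi/4).
import random

def estimate_pi(n, seed=None):
    rng = random.Random(seed)
    inside = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n

print(estimate_pi(200_000))  # close to 3.1416 for large n
```

Because each point is independent, the work parallelizes trivially: each node samples N divided by the number of nodes points, and the master averages the counts.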
Employing the Ising Model
The Ising model describes the ferromagnetic behavior of materials. Our goal was to model the effects of temperature on magnetism and look at the transition at the Curie temperature, where a material passes between the magnetized (ferromagnetic) and unmagnetized (paramagnetic) phases. We are still developing this code.
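A hedged sketch of the kind of code involved: a 2D Ising lattice of spins of value plus or minus one, updated with the standard Metropolis algorithm (this is a common textbook approach, not necessarily our final implementation; the coupling constant J and Boltzmann constant are set to 1).

```python
# 2D Ising model with Metropolis updates: flip a random spin, accept the
# flip if it lowers the energy, else accept with probability exp(-dE/T).
import math
import random

def metropolis_sweep(lattice, T, rng):
    n = len(lattice)
    for _ in range(n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        s = lattice[i][j]
        # Sum the four nearest neighbors with periodic boundary conditions.
        nb = (lattice[(i + 1) % n][j] + lattice[(i - 1) % n][j]
              + lattice[i][(j + 1) % n] + lattice[i][(j - 1) % n])
        dE = 2 * s * nb  # energy change if this spin is flipped
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            lattice[i][j] = -s

def magnetization(lattice):
    n = len(lattice)
    return abs(sum(sum(row) for row in lattice)) / (n * n)

rng = random.Random(1)
lat = [[1] * 16 for _ in range(16)]        # start fully magnetized
for _ in range(200):
    metropolis_sweep(lat, T=1.0, rng=rng)  # T well below Tc ~ 2.27
print(magnetization(lat))  # stays near 1 at low temperature
```

Running the same sweeps at a temperature above the critical value (about 2.27 in these units for the 2D model) drives the magnetization toward zero, which is the transition the project aims to map out.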
In addition to continuing work on the Ising model, we are using our mini supercomputer for outreach and individual research projects. We have met new faculty, graduate students, and classmates and look forward to working with many of them on collaborative projects. The cluster has also been a great recruitment tool and helped foster a more inclusive and welcoming environment in SPS. Overall, this has been an excellent learning experience for everyone in our chapter, drawing in physics undergraduates at all levels to learn new skills and mentor others.
To learn more about this project, visit spsnational.org/awards/sps-chapter-research-award/2019/university-kentucky.