Not just fun and games (Image: Jonathan Alcorn/Bloomberg/Getty Images)
Innovation is our regular column that highlights emerging technological ideas and where they may lead
Blasting zombies may seem to have little to do with serious research, but video game hardware is helping scientists in a variety of ways, including unravelling the mysteries of the brain.
Specialist programmers have long been repurposing the graphics processing units (GPUs) that power action-packed scenes in games for non-graphics tasks. Now recent advances have opened up GPU-based supercomputing to non-specialists.
GPUs have greater raw computational power than conventional CPUs, but have a more limited repertoire of tasks. Combining hundreds of individual processors, they excel at applying simple repetitive calculations to large bodies of data.
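The pattern the article describes, one simple calculation applied uniformly across a large body of data, can be sketched in a few lines. This is a hedged, CPU-side illustration using NumPy as a stand-in for the hundreds of GPU processors; the array and operation are invented for the example.

```python
import numpy as np

# Data-parallel style: the same simple calculation applied to every
# element of a large array -- the workload GPUs spread across
# hundreds of processors at once. NumPy stands in for the GPU here.
pixels = np.random.rand(1_000_000)            # a large flat array of values
brightened = np.clip(pixels * 1.2, 0.0, 1.0)  # identical per-element work
```

A conventional CPU would loop over the elements a few at a time; a GPU dispatches thousands of such identical operations simultaneously.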
Nicolas Pinto of the Massachusetts Institute of Technology is using them in his efforts to crack the brain’s formula for recognising objects in images. “The interesting thing about a GPU is that they are made to produce a visual world,” he says. “What we want to do is reverse that process.”
Hidden rules
“When an object moves across your retina, it will obey certain rules, the physical rules of the world,” Pinto says. “We are trying to learn these rules from scratch.”
Last year, for less than $3000, he built a 16-GPU “monster” desktop supercomputer to generate and test over 7000 possible variations of an object-recognition algorithm on video clips.
To test each model, Pinto’s makeshift supercomputer performed statistical analysis in both space and time on thousands of frames of video to find objects moving through the scene, then selected for the models best able to decipher the action.
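The screening idea, generate thousands of algorithm variants, score each against video data, and keep the best, can be sketched as follows. This is a hypothetical illustration: the parameter names and the scoring function are invented, not Pinto’s actual code.

```python
import random

def random_variant():
    # Sample one candidate parameter setting for the algorithm.
    # These parameters are purely illustrative.
    return {"threshold": random.uniform(0.0, 1.0),
            "window": random.choice([3, 5, 7])}

def score(variant):
    # Stand-in for running the model on held-out video and
    # measuring how well it tracked objects through the scene.
    return 1.0 - abs(variant["threshold"] - 0.5)

# Generate ~7000 variants, as in Pinto's screen, and keep the top 10.
variants = [random_variant() for _ in range(7000)]
best = sorted(variants, key=score, reverse=True)[:10]
```

Because each variant is scored independently, the 7000 evaluations are themselves embarrassingly parallel, which is exactly why the workload suits a rack of GPUs.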
He says this kind of work would previously have only been possible with a fully fledged supercomputer.
“If we weren’t newcomers in this field and could apply for multi-million dollar grants, then yes, we could probably get one of these massive computers from IBM,” he says. “But if money is an issue, or you are a newcomer, that is too expensive. It’s very cheap to buy a GPU and explore.”
Easy power
The latest graphics cards, from manufacturers ATI and Nvidia, have 512 individual processors. By dividing the work among these processors, they can reach speeds of half a trillion calculations per second.
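The arithmetic behind that headline figure is simple, assuming, purely for illustration, that each processor completes roughly one calculation per cycle at a clock rate of about 1 gigahertz:

```python
processors = 512
clock_hz = 1.0e9                 # assumed ~1 GHz, one calculation per cycle
calcs_per_second = processors * clock_hz
# 512 * 1e9 = 5.12e11, i.e. about half a trillion calculations per second
```

Real cards vary in clock speed and can retire more than one operation per cycle, so this is an order-of-magnitude sketch rather than a specification.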
Previously it took specialist programming skills to set GPUs to work on serious, non-graphics science, and the process was difficult and time-consuming.
“The path from describing the problem to getting results was pretty treacherous,” says Nvidia general manager Andy Keane.
“Things were in computer graphics shader languages and texture coordinates – none of the stuff we were used to in scientific computing,” says Chris Johnson, director of the Scientific Computing and Imaging Institute at the University of Utah in Salt Lake City. “It was extraordinarily difficult to map your problem to a GPU.”
Johnson says this changed around 2007 with the advent of new programming languages that make it easier for programmers without specialist graphics experience to program GPUs. Since then, researchers in both academia and industry have applied them to a wide range of problems.
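What those languages changed is the mental model: instead of recasting a calculation as shaders and texture coordinates, the programmer writes one "kernel" function describing the work for a single data element, and the runtime applies it across the whole dataset in parallel. A minimal sketch, with plain Python's `map` standing in for the parallel launch:

```python
def kernel(x):
    # The per-element computation the programmer actually cares about,
    # written as ordinary arithmetic rather than graphics operations.
    return x * x + 1.0

data = [0.5 * i for i in range(8)]
# On a GPU, each of these kernel invocations would run on its own
# processor simultaneously; here map() applies them in sequence.
result = list(map(kernel, data))
```

The kernel itself is an invented example; the point is the shape of the programming model, not any particular calculation.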
Exaflops beckons
While GPUs make desktop supercomputing accessible to a wide range of researchers, flagship computing centres such as Oak Ridge National Laboratory in Tennessee have also taken notice. Oak Ridge announced last October that its next supercomputer, predicted to be the world’s fastest, would be built using GPUs.
“As we look at how to get the next 1000 times faster, to an exaflops, or 10¹⁸ calculations per second, we see a lot of big challenges,” says a project director at Oak Ridge.
He says that the lab already uses clusters of GPUs for some number-crunching computing tasks such as climate modelling and simulations of supernovas. He says that increased precision and speed, along with reduced power consumption, make the cards an attractive option for the next generation of supercomputers. “We think this is one path to getting the higher-performance computing that we need.”