Date: 18 December 2022

Summary:

Frontier, the world’s first exascale supercomputer—or at least the first one that’s been made public—is coming online soon for general scientific use at Oak Ridge National Laboratory in Tennessee. Another such machine, Aurora, is seemingly on track to be completed any day at Argonne National Laboratory in Illinois. Now Europe’s getting up to speed. Through a €500 million pan-European effort, an exascale supercomputer called JUPITER (Joint Undertaking Pioneer for Innovative and Transformative Exascale Research) will be installed sometime in 2023 at the Forschungszentrum Jülich, in Germany.

Full text:

Frontier, the world’s first exascale supercomputer—or at least the first one that’s been made public—is coming online soon for general scientific use at Oak Ridge National Laboratory in Tennessee. Another such machine, Aurora, is seemingly on track to be completed any day at Argonne National Laboratory in Illinois. Now Europe’s getting up to speed. Through a €500 million pan-European effort, an exascale supercomputer called JUPITER (Joint Undertaking Pioneer for Innovative and Transformative Exascale Research) will be installed sometime in 2023 at the Forschungszentrum Jülich, in Germany.

Thomas Lippert, director of the Jülich Supercomputing Center, likens the addition of JUPITER, and the expanding supercomputing infrastructure in Europe more broadly, to the construction of an astonishing new telescope. “We will resolve the world much better,” he says.

The European Union–backed high-performance computing arm, EuroHPC JU, is underwriting half the cost of the new exascale machine. The rest comes from German federal and state sources.

Exascale supercomputers can, by definition, surpass an exaflop—more than a quintillion floating-point operations per second. Doing so requires enormous machines. JUPITER will reside in a cavernous new building housing several shipping-container-size water-cooled enclosures. Each of these enclosures will hold a collection of closet-size racks, and each rack will support many individual processing nodes.

How many nodes will there be? The numbers for JUPITER aren’t yet set, but you can get some idea from JUWELS (shorthand for Jülich Wizard for European Leadership Science), a recently upgraded system currently ranking 12th on the Top500 list of the world’s most powerful supercomputers. JUPITER will sit close by but in a separate building from JUWELS, which boasts more than 3,500 computing nodes all told.

With contracts still out for bid at press time, scientists at the center were keeping schtum on the chip specs for the new machine. Even so, the overall architecture is established, and outsiders can get some hints about what to expect by looking at the other brawny machines at Jülich and elsewhere in Europe. JUPITER will rely on GPU-based accelerators alongside a universal cluster module, which will contain CPUs. The planned architecture also includes high-capacity disk and flash storage, along with dedicated backup units and tape systems for archival data storage.

The JUWELS supercomputer uses Atos BullSequana X hardware, with AMD EPYC processors and Mellanox HDR InfiniBand interconnects. The most recent EuroHPC-backed supercomputer to come online, Finland-based LUMI (short for Large Unified Modern Infrastructure), uses HPE Cray hardware, AMD EPYC processors, and HPE Slingshot interconnects. LUMI is currently ranked third in the world. If JUPITER follows suit, it may be similar in many respects to Frontier, which hit exascale in May 2022, also using Cray hardware with AMD processors.

Harnessing Europe’s new supercomputing horsepower

“The computing industry looks at these numbers to measure progress, like a very ambitious goal: flying to the moon,” says Christian Plessl, a computer scientist at Paderborn University, in Germany. “The hardware side is just one aspect. Another is, How do you make good use of these machines?” Plessl has teamed up with chemist Thomas Kühne to run atomic-level simulations of both HIV and the spike protein of SARS-CoV-2, the virus that causes COVID-19.
Last May, the duo ran exaflop-scale calculations for their SARS simulation—involving millions of atoms vibrating on a femtosecond timescale—with quantum-chemistry software running on the Perlmutter supercomputer. They exceeded an exaflop because these calculations were done at lower precisions, of 16 and 32 bits, as opposed to the 64-bit precision that is the current standard for counting flops.

Kühne is excited by JUPITER and its potential for running even more demanding high-throughput calculations, the kind of calculations that might show how to use sunlight to split water into hydrogen and oxygen for clean-energy applications. Jose M. Cela at the Barcelona Supercomputing Center says that exascale capabilities are essential for certain combustion simulations, for really large-scale fluid dynamics, and for planetary simulations that encompass whole climates.

Lippert looks forward to a kind of federated supercomputing, where the several European supercomputer centers will use their huge machines in concert, distributing calculations to the appropriate supercomputers via a service hub. Cela says communication speeds between centers aren’t fast enough yet to manage this for some problems—a gas-turbine combustion simulation, for example, must be done inside a single machine. But this approach could be useful for certain problems in the life sciences, such as genetic and protein analysis. The EuroHPC JU’s Daniel Opalka says European businesses will also make use of this burgeoning supercomputing infrastructure.

Even as supercomputers get faster and larger, they must work harder to be more energy efficient. That’s especially important in Europe, which is enduring what may be a long, costly energy crisis. JUPITER will draw 15 megawatts of power during operation. Plans call for it to run on clean energy. With wind turbines getting bigger and better, JUPITER’s energy demands could perhaps be met with just a couple of mammoth turbines. And with cooling water circulating among the mighty computing boxes, the hot water that results could be used to heat homes and businesses nearby, as is being done with LUMI in Finland. It’s one more way this computing powerhouse will be tailored to the EU’s energy realities.

This article appears in the January 2023 print issue as “Exascale Comes to Europe.”
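To make the precision point above concrete: the Perlmutter runs counted as exaflop-scale because they used 16- and 32-bit arithmetic rather than the 64-bit arithmetic on which the standard flop count is based. A rough Python sketch of that accounting follows; the rates and speedup factors are illustrative assumptions, not measurements of Perlmutter, JUPITER, or any other system.

```python
# Back-of-the-envelope sketch: how the same hardware can fall short of an
# exaflop (1e18 floating-point operations per second) in the standard
# 64-bit measure yet exceed it at 16- or 32-bit precision.
# All numbers below are illustrative assumptions, not vendor specs or
# published benchmark results.

fp64_rate_exaflops = 0.07        # assumed sustained 64-bit rate of a large GPU system
relative_throughput = {          # assumed speedup over 64-bit arithmetic
    "64-bit": 1,
    "32-bit": 2,
    "16-bit": 16,
}

for precision, factor in relative_throughput.items():
    rate = fp64_rate_exaflops * factor
    status = "above" if rate >= 1.0 else "below"
    print(f"{precision}: ~{rate:.2f} exaflops ({status} the exascale threshold)")
```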
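A similarly rough check on the wind-power claim, under assumptions not taken from the article (a 15-megawatt-class turbine and a 40 to 50 percent capacity factor, with intermittency ignored entirely):

```python
# Rough energy arithmetic for the claim that a couple of very large wind
# turbines could cover JUPITER's 15 MW draw. Assumptions (not from the
# article): 15 MW-class turbine rating, 40-50% capacity factor,
# intermittency and grid balancing ignored.

demand_mw = 15.0            # JUPITER's stated operating power draw
turbine_rating_mw = 15.0    # assumed nameplate rating of a "mammoth" turbine

for capacity_factor in (0.40, 0.50):
    average_output_mw = turbine_rating_mw * capacity_factor
    turbines_needed = demand_mw / average_output_mw
    print(f"capacity factor {capacity_factor:.0%}: "
          f"~{turbines_needed:.1f} turbines on an average-energy basis")
```

On an average-energy basis, two or three such turbines would indeed cover a 15-megawatt draw, though matching supply to demand hour by hour is another matter.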
Magazines love to dabble in prognostication, particularly when it comes to emerging technology. Startups show such futuristic pronouncements to potential investors, who in turn use them as data points to inform their bets. And as readers, we gravitate toward them, if only so we can feel superior when, say, a highly anticipated product launch bombs—chef’s kiss to Mark Zuckerberg for the schadenfreude fest that is the metaverse.

Think of IEEE Spectrum’s annual technology forecast as prognostication filtered through a skeptical lens and years of ongoing coverage of technological advances from lab to market. Each January, we look at projects across the globe and from a range of engineering disciplines that will have major milestones in the coming year.

While some technologies flop and fade away, others produce multiple hype cycles that raise and then dash hopes again and again. Take flying cars. Back in 2007, we predicted that the flying-car startup Terrafugia would fail. The company lurched along for years until finally ceasing U.S. operations in 2021, ironically just after receiving a Special Light-Sport Aircraft airworthiness certificate from the U.S. Federal Aviation Administration. But as we wrote in 2014, flying cars are an idea that will not die. And even though the road—or the flight path—to commercial success is riddled with regulatory and social obstacles, eVTOLs (a newer, less sullied, acronym-a-licious moniker for flying cars) have attracted billions of dollars in investment in recent years. And now the sector seems poised to finally take off, as Editorial Director of Content Development Glenn Zorpette reports in his story about Opener’s BlackFly eVTOL.

Flying cars illustrate one path that emerging technologies follow, with innovators and investors taking chances and failing early on. True believers learn from those failures, ultimately leading to solutions that are then brought to market. Sometimes, though, externalities like a changing climate fast-track technologies that have been languishing in development for decades. Back in 2001, Senior Editor Michael J. Riezenman wrote about hydrogen fuel cells as a promising answer to long-haul transportation needs. Back then, the hydrogen economy seemed right around the corner. Fast-forward 22 years, and Contributing Editor Peter Fairley reports on two Australian companies that aim to use hydrogen to make a big dent in the country’s greenhouse-gas emissions. One company is using renewable energy to produce hydrogen as fuel for huge trucks to haul zinc ore. The other is developing a new generation of electrolyzers to produce hydrogen for export, although exactly how that will work has yet to be determined. Thank the pressures of the climate crisis for this green-hydrogen boom.

Cryptocurrencies, which we’ve been covering since they emerged, have imploded over the last several months. This crypto winter has soured many people on that particular application of blockchain technology, but there are many other, perhaps more promising ways to apply a blockchain. One is as a means of providing proof of personhood, as the journalist Edd Gent explores in his critical look at Worldcoin. The company’s founders want Worldcoin to be not only a global currency that will somehow redistribute wealth via universal basic income but also a secure means of biometric identification, with a dose of buzzy Web3 facilitation thrown in for good measure.

And so, while crypto is tanking and the NFT market has fizzled, something useful may yet rise from the ashes of Web3. I’ll go out on a limb here and predict that Web3 will be recalled in years to come as a figment of some collective pandemic fever dream. Check back in a few years to see how that prognostication pans out.

Meanwhile, have some fun with this issue. And for IEEE members, enjoy your exclusive member benefit: online access to our feature archives going back to 2000. Log in to the Spectrum website to trace how technologies like lidar and microLEDs have developed into components that now enable other technologies—a new generation of blimps and optical interconnects for chiplets, respectively—which are also featured in this issue.
This is a sponsored article brought to you by 321 Gang.

To fully support Requirements Management (RM) best practices, a tool needs to support traceability, versioning, reuse, and Product Line Engineering (PLE). This is especially true when designing large, complex systems or systems that follow standards and regulations. Most modern requirements tools do a decent job of capturing requirements and related metadata. Some tools also support rudimentary mechanisms for baselining and traceability (“linking” requirements). Earlier versions of IBM DOORS Next supported rich, configurable traceability and even a rudimentary form of reuse. DOORS Next became a complete solution for managing requirements a few years ago when IBM invented and implemented Global Configuration Management (GCM) as part of its Engineering Lifecycle Management (ELM, formerly known as Collaborative Lifecycle Management, or simply CLM) suite of integrated tools.

On the surface, it seems that GCM just provides versioning capability, but it is much more than that. GCM arms product and system development organizations with support for advanced requirements reuse, traceability that supports versioning, release management, and variant management. It is also possible to manage collections of related Application Lifecycle Management (ALM) and Systems Engineering artifacts in a single configuration.

Before GCM, Project Areas were the only containers available for organizing data, and a Project Area could support only one stream of development. Enabling application-local Configuration Management (CM) and GCM allows for the use of Components. Components are contained within Project Areas and provide more granular containers for organizing artifacts, along with new configuration-management constructs: streams, baselines, and change sets at the local and global levels. Components can be used to organize requirements functionally, logically, physically, or using some combination of the three.

A stream identifies the latest version of a modifiable configuration of every artifact housed in a component. The stream automatically updates the configuration as new versions of artifacts are created in the context of the stream. The multiple-stream capability in components equips teams with the tools needed to seamlessly manage multiple releases or variants within a single component.

Before GCM support, the associations between Project Areas enabled traceability only between single versions of ALM artifacts. With GCM, virtual networks of components can be constructed, allowing for traceability between artifacts across components – between requirements components and between artifacts across other ALM domains (software, change management, testing, modeling, product parts, etc.).

321 Gang has defined common usage patterns for working with components and their streams. These patterns include Variant Development, Parallel Release Management, Simple Single Stream Development, and others. The GCM capability for virtual networks and the use of some of these patterns provide a foundation to support PLE.

321 Gang has put together a short on-demand webinar, titled Global Configuration Management: A Game Changer for Requirements Management, that expands on the topics discussed here. In the webinar we build a case for component-based requirements management, illustrate Global Configuration Management concepts, and introduce common GCM usage patterns using the ELM suite of tools.
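To make the component, stream, and baseline concepts described above easier to picture, here is a purely illustrative Python sketch of how they relate. It is a toy model, not IBM’s ELM or DOORS Next API; every class, method, and artifact name here is hypothetical.

```python
# Purely illustrative toy model of the constructs described above
# (components, streams, baselines). This is NOT IBM's ELM/DOORS Next
# API; all names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Artifact:
    identifier: str
    version: int
    text: str

@dataclass
class Baseline:
    """A frozen, immutable snapshot of a stream's configuration."""
    label: str
    artifacts: dict[str, Artifact]

@dataclass
class Stream:
    """A modifiable configuration that always points at the latest versions."""
    name: str
    artifacts: dict[str, Artifact] = field(default_factory=dict)

    def update(self, artifact: Artifact) -> None:
        # The stream automatically tracks the newest version of each artifact.
        self.artifacts[artifact.identifier] = artifact

    def baseline(self, label: str) -> Baseline:
        # Freeze the current configuration for later traceability or reuse.
        return Baseline(label, dict(self.artifacts))

@dataclass
class Component:
    """Granular container inside a Project Area; may hold several streams,
    for example one per release or product variant."""
    name: str
    streams: dict[str, Stream] = field(default_factory=dict)

# Example: one requirements component with a variant stream and a baseline.
reqs = Component("Braking-System-Requirements")
variant_a = Stream("variant-A")
reqs.streams[variant_a.name] = variant_a
variant_a.update(Artifact("REQ-101", 1, "Stop within 40 m from 100 km/h."))
release_1 = variant_a.baseline("Release 1.0")
variant_a.update(Artifact("REQ-101", 2, "Stop within 38 m from 100 km/h."))
# release_1 still holds version 1; the stream now points at version 2.
```

The relationship the sketch tries to capture is that a stream always reflects the latest artifact versions, while a baseline freezes a configuration so a release or variant can be traced and reused later.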