
Is Today’s Supercomputing Racing to Replace the Human Brain?


Earlier this month, the Aurora supercomputer made headlines by climbing to the second spot among the world’s most powerful supercomputers. Clocking in at a staggering 1.012 exaflops, Aurora is a marvel of modern engineering. An exaflop, for those unfamiliar, equates to one quintillion floating-point operations per second, a figure so vast it’s almost beyond human comprehension. What was once relegated to the realm of science fiction is now reality, with exascale computing demonstrated on two supercomputers based in the United States.
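
To make that unit concrete, here is a quick back-of-envelope comparison in Python; the roughly one-teraflop laptop figure is an illustrative assumption, not a benchmark of any particular machine.

# 1 exaflop = 10**18 floating-point operations per second.
AURORA_EXAFLOPS = 1.012
aurora_flops = AURORA_EXAFLOPS * 10**18          # convert exaflops to FLOP/s

# Assumed point of comparison: a laptop sustaining about 1 teraflop (10**12 FLOP/s).
laptop_flops = 1e12

print(f"Aurora: {aurora_flops:.3e} FLOP/s")
print(f"Roughly {aurora_flops / laptop_flops:,.0f}x the assumed 1-teraflop laptop")
# Output is on the order of one million times the assumed laptop figure.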

Aurora resides in Illinois at the Argonne National Laboratory, while its counterpart, Frontier, is in Tennessee at the Oak Ridge National Laboratory. Both fall under the aegis of the U.S. Department of Energy (DOE), a fitting arrangement given the immense power requirements of these colossal computing systems.

The Energy Paradox

Frontier, currently the world’s top supercomputer, operates at an astonishing 1.206 exaflops. This beast of a machine, built on Hewlett Packard Enterprise (HPE) Cray architecture, relies on advanced central processing units (CPUs) and graphics processing units (GPUs) from Advanced Micro Devices (AMD). These chips give Frontier the capability to tackle complex scientific problems ranging from nuclear fusion to cosmology, climate modeling, and subatomic particle research.

However, such remarkable computational power comes at a hefty cost – Frontier requires 21 megawatts of electricity to run, enough to power approximately 20,000 homes. This stands in stark contrast to the human brain, a biological supercomputer that operates on a mere 12 watts, less than a typical light bulb. Despite its minuscule power consumption, the brain packs roughly 100 billion neurons and an estimated 100 trillion synaptic connections (often likened to the parameters of an artificial neural network), a level of complexity no artificial system has yet matched.
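
Using only the figures quoted here, the efficiency gap can be made explicit; the roughly 1 kW average household draw is simply what the 20,000-home comparison implies, not an independent measurement.

# Energy comparison built from the figures above.
frontier_flops = 1.206e18      # Frontier: 1.206 exaflops
frontier_watts = 21e6          # 21 megawatts
brain_watts = 12               # human brain: ~12 watts

# Floating-point operations per second delivered per watt on Frontier.
print(f"Frontier efficiency: {frontier_flops / frontier_watts:.2e} FLOP/s per watt")  # ~5.7e10

# How many human brains could run on Frontier's power budget.
print(f"Power ratio: {frontier_watts / brain_watts:,.0f} brains' worth of electricity")  # 1,750,000

# Average draw per home implied by the 20,000-home comparison.
print(f"Implied draw per home: {frontier_watts / 20_000:,.0f} W")  # ~1,050 W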

Understanding the Brain: A Complex Challenge

The quest to replicate the efficiency of the human brain has spurred significant interest and research in biological computing. The field of connectomics, which studies the intricate connections between brain cells, aims to unravel the secrets of the brain’s extraordinary efficiency. This endeavor recently achieved a milestone when Google, in collaboration with Harvard researchers, published groundbreaking research in the journal Science.

The team successfully mapped a single cubic millimeter of human temporal cortex using electron microscopy, imaging a piece of brain tissue about the size of half a grain of rice. The data collected was staggering, amounting to 1.4 million gigabytes (or 1.4 petabytes). For context, the average smartphone holds around 128 gigabytes, making this dataset roughly 11,000 times larger.
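
The “11,000 times” comparison follows directly from the numbers given:

# Scale of the connectomics dataset versus a typical smartphone.
dataset_gb = 1.4e6     # 1.4 million gigabytes, i.e. 1.4 petabytes
phone_gb = 128         # smartphone storage used for the comparison

print(f"Dataset size: {dataset_gb / 1e6:.1f} PB")
print(f"About {dataset_gb / phone_gb:,.0f}x a 128 GB phone")   # ~10,938, i.e. roughly 11,000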

This minuscule volume of brain tissue revealed 16,000 neurons, 32,000 glia, 8,000 blood vessel cells, and 150 million synapses. This feat was made possible by advanced machine learning (ML) software, highlighting the role of artificial intelligence (AI) in advancing our understanding of the human brain.

AI and Human Brain Mapping

This achievement builds on previous work, including a 2020 project where the same Google team mapped a portion of a fruit fly brain, revealing the connections of 25,000 neurons. The rapid progress in just four years underscores the transformative impact of AI and ML in connectomics.

One of the more intriguing findings from the recent research was the discovery of a rare class of synaptic connections, in which a portion of one neuron was observed making more than 50 separate connections with a single partner neuron. While the significance of this discovery is not yet fully understood, it represents a crucial step toward comprehending the brain’s complex architecture and functionality.

The Race for Ultra-Intelligent AI

While understanding the brain remains a significant scientific challenge, the semiconductor and computing industries are racing to develop technology that mimics the brain’s capabilities. The goal is not just to replicate the brain’s energy efficiency but to manage and process vast amounts of information with comparable sophistication.

[Image: Graphcore’s Intelligence Processing Unit. Source: Graphcore]

One company at the forefront of this race is Graphcore, a private firm that has developed the Intelligence Processing Unit (IPU). Graphcore’s latest-generation IPU, released in 2022, features a stacked wafer-on-wafer design that enables a 3D semiconductor architecture. This design allows for massive parallel processing, distinguishing it from traditional CPUs and GPUs.

The Graphcore IPU and the Good Computer

Graphcore’s IPU is built for multiple-instruction, multiple-data (MIMD) execution, allowing each of its processor cores to run its own instruction stream on its own data, maximizing computational throughput. This approach is set to culminate in the release of the Good Computer, named after computer scientist I. J. “Jack” Good, who back in 1965 envisioned an “ultraintelligent machine” more capable than any human brain.
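
As a rough illustration of the MIMD idea only (this is plain Python multiprocessing, not Graphcore’s own software stack, and the three worker functions are invented for the example), independent workers each execute a different program on different data at the same time, whereas a SIMD-style device would apply one instruction stream across many data lanes.

# Illustrative MIMD sketch: each worker runs its own instruction stream on its own data.
from multiprocessing import Pool

def moving_average(series):            # hypothetical task A
    return [sum(series[max(0, i - 2):i + 1]) / len(series[max(0, i - 2):i + 1])
            for i in range(len(series))]

def histogram(values):                 # hypothetical task B
    counts = {}
    for v in values:
        counts[v] = counts.get(v, 0) + 1
    return counts

def dot_product(pair):                 # hypothetical task C
    a, b = pair
    return sum(x * y for x, y in zip(a, b))

if __name__ == "__main__":
    # Three different programs with three different inputs, dispatched in parallel.
    jobs = [
        (moving_average, [1, 4, 2, 8, 5]),
        (histogram, ["a", "b", "a", "c", "a"]),
        (dot_product, ([1, 2, 3], [4, 5, 6])),
    ]
    with Pool(processes=len(jobs)) as pool:
        handles = [pool.apply_async(fn, (arg,)) for fn, arg in jobs]
        for h in handles:
            print(h.get())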

The Good Computer, slated for delivery in 2024, will be powered by the next generation of Graphcore IPUs and will boast over 10 exaflops of AI-specific computational power. It will support AI models with sizes of up to 500 trillion parameters, a scale previously unimaginable. This development heralds the dawn of artificial general intelligence (AGI) and potentially artificial superintelligence (ASI).
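
For a sense of what 500 trillion parameters implies, here is a rough footprint calculation; the 2-bytes-per-parameter figure assumes FP16 weights and is an illustrative assumption that ignores optimizer state and activations.

# Back-of-envelope memory for a 500-trillion-parameter model (weights only).
params = 500e12
bytes_per_param = 2                    # FP16 weights (assumed)

weights_petabytes = params * bytes_per_param / 1e15
print(f"Weights alone: {weights_petabytes:.1f} PB")   # 1.0 PB just to store the weights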

The Cost and Pace of AI Advancements

What’s equally remarkable is that the Good Computer is expected to cost “only” $120 million. By comparison, today’s most advanced large language models, such as OpenAI’s GPT-4 (reported to have on the order of 200 billion parameters), cost hundreds of millions of dollars to train. The declining cost per unit of computing power is accelerating the pace of AI innovation.

Advances in AI-specific semiconductors and related computing systems are occurring at an exponential rate, outpacing even Moore’s Law. This rapid progression means that AI software development is becoming faster and cheaper, driving unprecedented innovation.

The Future of AI and Human Intelligence

We are on the brink of a new era where the world will be alive with a network of intelligence far more powerful than the collective brains of the human race. The implications of having intelligent machines capable of autonomous research and development are profound. These advancements will revolutionize industries, drive scientific breakthroughs, and transform our understanding of intelligence itself.

Conclusion

As we stand at the bleeding edge of technology, the race to replicate and surpass the capabilities of the human brain is accelerating. The achievements in exascale computing, AI, and connectomics are converging to create a future where machines not only complement human intelligence but exceed it. We are on an incredible journey, exploring the outer limits of what’s possible, and the discoveries we make along the way will shape the future of humanity.

Recommended Reading: Navigating the Convergence of Crypto and AI Technologies

