
TechArena host Allyson Klein talks with Lyssn co-founder Zac Imel about how his company intends to change the shape of mental health using artificial intelligence.

TechArena host Allyson Klein talks with Ampere Chief Product Officer Jeff Wittich on the rise of Ampere-fueled computing in the cloud and why Ampere's lineup places it in an excellent position for the next wave of cloud growth.

It’s hard to believe that cloud computing is turning 25 this year, likely signing up for its first 401(k) and starting to check its credit score. It was in 1997 that Emory University Professor Ramnath Chellappa defined cloud computing as a “computing paradigm, where the boundaries of computing will be determined by economic rationale, rather than technical limits alone.” Soon after, Amazon introduced retail-based services, and in 2006 both Amazon and Google made what would be historic moves with the introduction of AWS and Google Docs services. Netflix followed the next year, NASA’s Nebula arrived in 2008 (remember that?), and enterprises woke up to the fact that their infrastructure was becoming antiquated. With the economic uncertainty facing many organizations in the financial crisis of 2009, new technology alternatives that previously might have seemed risky were now seen as opportunities to build new agility into IT organizations. Private clouds were born, and the Arthurian quest for the perfect hybrid cloud environment was kicked off.
Today we all know the value of cloud. Even people who don’t obsess about tech know the value of cloud services to our lives and how they have fundamentally transformed societal function. We are also all acutely aware of how the past two years of pandemic would have been exponentially worse if not for this technology. As we enter the post-pandemic world, I wanted to delve deep into where the cloud stands today. Over the following weeks I’ll be sharing opinions on where we are across infrastructure, software stacks, security, and services from some of the industry’s brightest minds, and we’ll uncover a view on what promising innovations will drive cloud capability forward into the next quarter century of advancement.
I’m kicking off the series with a conversation with Ampere chief product officer Jeff Wittich on the foundational future of cloud processing requirements and why Ampere has built its products from the ground up for cloud workloads. In its fifth year of existence, Ampere has gone from silicon dream to deployment reality at some of the largest cloud service providers on the planet and proven that new architectures (even those not designed in house) can thrive in cloud environments. Jeff also shares his view that we’ve reached another macro environment where economic uncertainty opens the door to new avenues for technology innovation with Ampere products delivering new opportunity for performance and cost efficiency. I invite you all to check out the interview, subscribe to get the entire series, and reach out to me and my guests to continue the dialogue. Thanks for engaging - Allyson

It was incredible to be back at a full-fledged, in-person SC’22 in Dallas this week. After two years of pandemic-limited interaction, the conference felt vibrant and essential to the sharing of ideas and innovation. I’m back in Portland and reflecting on the advances the largest research institutions have made in the past year with new entries in the Top500, a heightened focus on research collaboration spurred by a period of acute scientific demand from humanity, and a hope for additional collaboration from the industry towards new heterogeneous systems to fuel the proliferation of Exascale computing and beyond. Farther afield, I’m keeping my eye on the advancement of chiplet architectures and how they’ll shape future systems.
Some quick takes from me on the silicon front. Yes, we’re seeing advancement by AMD taking the top spot with the Frontier system and inclusion in over 20% of the newest list of top supercomputers. This was expected, but for me the real story to watch in the coming year is the advancement of heterogeneous systems powered by CXL, providing more flexibility in design for matrix and vector processing requirements. The question is no longer which silicon but what complement of silicon provides the flexibility required for diverse HPC workloads. We also saw the announcement of the UCIe 1.0 specification, providing an industry-standard chiplet interconnect. We’ve talked about chiplets for a while now, but with support from all the major logic vendors AND many of the major cloud providers, plus integration with CXL for near-term volume attach, I anticipate seeing some vendor news on integration of UCIe into future products soon. The net net? The customer wins with more flexibility of silicon choice for computing needs, and industry innovation accelerates with a standards-based playing field.
Then there’s data. The takeaway is that researchers have a lot of it and need to manage it. I published my discussion with Jeff Denworth, co-founder of Vast Data, on their new universal storage solutions: all-flash NAS that creates an efficient and scalable storage alternative. Jeff thinks this will disrupt the memory-storage paradigm, and we already know that with CXL invading platforms we’ll see “far memory” designs creating new opportunity for lower latency data delivery as well. In Turing Award winner Jack Dongarra’s lecture at the conference, he laid out that this is the bottleneck for HPC systems today, which is why I was equally intrigued to see the advancements in the IO500 systems as I was for the Top500. The IO500 organization publishes interesting data not only on which systems deliver the best bandwidth, metadata performance, and overall performance, but also a cross-section of which storage platforms were submitted for analysis (with Lustre being the predominant class of storage system for this report). If you’re not familiar yet with IO500, I’d encourage you to dig into the results and review the presentation that they delivered at SC’22.
Finally, there’s the research itself, and this is what makes SuperComputing such an inspirational conference. To hear directly from scientists on the challenges they’re solving with the help of supercomputing is always impressive. One example was Karissa Sanbonmatsu’s discussion of her team at Los Alamos’ progress in unlocking genomes at the atomic level. She described the holy grail of cell-level research as studying a single human cell for ten days, requiring 10¹² yottaflops of compute power. The complexity? A single gene represents over a billion atoms, and measuring molecular dynamics for a gene requires more than 100 million calculations per second. Sanbonmatsu is famous for her study of ribosomes, those biological elements that connect mRNA and tRNA to synthesize polypeptides and proteins and are central to understanding how living systems operate as well as how drug and vaccine therapies work. The ribosome is a central player in how COVID-19 vaccinations protect us from the virus, and its continued study (and the underlying compute innovation required to continue unlocking it) will assist with creation of other therapies to combat a myriad of diseases.
We also heard from NASA about their research in air pollution and its effect on the planet. My discussion with NASA researcher Megan Damon provided insight into how their supercomputing center is furthering our understanding of the human and natural contributors to air pollution, how these aerosols and particulates travel across the globe, and how they contribute to climate change and human health. One in eight premature deaths today is partially attributable to air quality, so this research will help us better understand the interrelation between what’s in our air and how we can mitigate impact on humanity. Again, computing has a main role to play in delivering insight to fuel scientists working on this study.
And that’s a wrap from SC’22! Thanks for engaging - Allyson

Tech Arena host Allyson Klein chats with NASA’s Megan Damon on her group’s study of global air pollutants and how supercomputing helps speed insight into global climate change and human welfare.

A good part of the industry is focused on how to contend with data. We all know that we’re creating more of it at an eye-opening rate, but harnessing data for positive organizational or societal value is a non-trivial exercise. Entire industries have been built searching for this holy grail, and as the era of AI dawns upon us our desire to bring larger data sets to bear to solve our largest problems has grown.
There’s no better place to consider data architectures than Supercomputing. This community of the largest compute clusters on the planet knows a thing or two about data at scale, and I was confident that there would be innovation on display in the halls of SC’22. Last night’s plenary session featured some of the leading minds in scientific computing today (more on that in a future post), and one observation that really sunk in was that scientific discovery has fundamentally shifted from finding enough scientists to gather data to focusing scientists to infer the correct correlations from the mountains of data that we have. Industry tools to make this easier on the scientific community would offer the opportunity to more easily curate data and therefore speed insights. This observation in the realm of HPC easily extrapolates to the private sector from enterprise to the largest cloud providers.
I had the distinct pleasure of speaking with Jeff Denworth, co-founder and CMO of Vast Data, about their innovative approach to data storage…what they call Universal Storage. The Vastronauts have been getting a lot of attention of late for delivering all-flash storage solutions that they claim disrupt traditional storage paradigms and provide an easier path for managing large data pools. Gartner placed Vast squarely on its Magic Quadrant, CRN named them the emerging player of the year…and today HPCWire recognized them for their ascent as a key provider for the HPC arena.
Check out my chat with Jeff where he shares more about the architecture and where the Vast team is seeing deployment interest. Jeff spoke about his background in Lustre and the fact that the Vast solution turns away from more complicated storage topologies in favor of a NAS that is extremely well designed. That simplicity, efficiency, and scale are absolutely going to grab attention. I hope you enjoy the discussion. Thanks for engaging. - Allyson

Allyson chats with Vast Data co-founder and CMO Jeff Denworth about Universal Storage and why it aims to disrupt traditional data paradigms.

Turing Award Winner Jack Dongarra Shines a Light on the State of Supercomputing
Today at SC’22 the Turing Award winner, Jack Dongarra of the University of Tennessee, provided a retrospective on high performance computing from its early days to opportunities of the Exascale era. A note on the Turing Award. Named after Alan Turing and often considered a Nobel equivalent for computing, the award goes to a scientist each year who has contributed to the field of computing advancement and comes with a $1,000,000 prize. Past recipients have delivered inventions like the UNIX operating system (Ken Thompson and Dennis Ritchie in 1983), TCP/IP (Vint Cerf and Bob Kahn in 2004), compiler and automatic parallel execution (Frances Allen, the first woman awarded the Turing in 2006), and design of computer architectures (David Patterson and John Hennessy in 2017). These are people who can hush rooms when speaking, and this year’s award recipient is no different.
One could argue that the HPC field and SuperComputing conference itself would be vastly different without Jack Dongarra’s contributions. His continuous investment across eras of HPC development has enabled the scientific community to both maximize the value of infrastructure and measure infrastructure performance. It’s this last bit that offered a unique lesson for those listening to Jack’s talk today. It’s the story of how the Top 500 came into being and, really, how we have standard benchmarks for everything the computing industry delivers. This, for someone like me who has been in this industry for a minute or two, is like uncovering why water is wet. We care deeply about advancements in performance, and things like benchmarks and the Top 500 give us the fundamental tools to measure advancements in a fair and consistent manner.
So how did this happen? Jack, as one of the creators of the LINPACK benchmark, began keeping a simple table back in the 1980s that used a set of complex equations to measure the relative performance of systems. He maintained that list of relative performance, and in 1993 merged his efforts with two other scientists’ similar pursuits. The Top 500 was born, with a system from Los Alamos at the top: 1,024 processors busy simulating nuclear warheads. Since then, labs have vied for top spots on the Top 500, vendors have scrambled to ensure their infrastructure is featured prominently, and the entire computing industry has been pushed forward, reaching for more performance to fuel these massive machines. This is a great example of human motivation driven in part by standard metrics, and we all have to thank Jack and his colleagues for this bar.
So what has happened since the introduction of the Top500? Jack spoke about waves of computing architectural transformations: from shared memory systems in the ’90s, to distributed memory systems of the 2000s, to the introduction of multicore and hybrid architectures in the 2010s, and today the era of exascale fueled by the merger of HPC and AI, the evolution of heterogeneous platforms, and ultimately performance at Exascale (a billion billion flops). The Exascale Computing Project, with multi-billion-dollar funding from the US government, has focused on 21 applications across the realms of scientific exploration with a common theme of 3D model simulation, all benefiting from underlying infrastructure advancements and software optimization efforts and contributions from Dr. Dongarra. We’ll continue to see the secrets of the universe unlocked at more rapid rates due to the collective contributions of this HPC community, and for that I’m incredibly grateful.
Still, there are some challenges ahead. The Top 500 has also become a geopolitical hot potato. With China holding the pole position with over 160 supercomputers on the list and the US in second position with 125, there is an existential reality that those who can unlock these secrets have the power of first action. In fact, there is belief that China holds two Exascale-class machines that it has not submitted to the Top 500 list in this era of microprocessor power becoming a strategic national security asset. One need not look further than the recent history of China and technology sanctions and the US CHIPS Act to understand the value of microprocessors to national security interests, and the nexus of this challenge is playing out on the Top 500 list. In reflecting on this, I can only think back to yesterday’s plenary session at the conference, where leaders of supercomputing from ORNL and NIAID called for a new era of collaborative research to solve our most pressing problems. This open and collaborative approach proved critical most recently to unlocking the COVID-19 virus and has been a cornerstone of computer-aided research since its inception. In this way, the Top500 is providing a different kind of transparent measurement, one that shines a light on how geopolitical concerns can stand in the way of societal advancement. I’m hoping that the spirit of Jack Dongarra and his fellow Turing Award winners prevails to all of our benefit. Thanks for engaging. - Allyson

Welcome to the Tech Arena, a new media platform delivering authentic conversations with the leading technology innovators. As a veteran of the industry, having worked on innovations as far-reaching as cloud computing, 5G networks, and AI, I have heard a lot of stories from companies inventing technology and those tapping this technology to create business opportunity or societal advancement. I’m a great believer that technology is the foundational change agent in society today, so the work being delivered across the tech arena is inspirational. The stories we hear, however, can sometimes be pigeonholed into overused hero-vs-villain archetypes, or stories focused on only a subset of our true community. There are other voices out there just waiting to be heard.
I left my corporate career after leading organizations at some of the largest names in tech to launch the TechArena because I strongly believe that we’re all seeking a different dialogue. One that isn’t afraid to highlight stories from both tech titans and scrappy disruptors delivering something that will push us all forward. We’ll roll up our sleeves and go two clicks down to discuss not just the what but also the how and why of an innovator’s aim and intention. And you’ll hear a broader array of voices on this channel…yes, the women of tech including yours truly, but also a diversity of technology roles, teams from locations across the globe, and stories that may not be grabbing the headlines but have the opportunity to push us forward to the benefit of all.
There was no better place to launch the TechArena for me than Supercomputing 2022. This annual conference features not only the largest-scale computing on the planet but is also a nexus for the scientific community to congregate to discuss what insights were delivered in the past year due to the contributions of technology, and what they collectively need from the tech industry to reach those next exploration breakthroughs. Today’s targets focus on unlocking the foundations of life, mitigating climate impact, and exploring the farthest reaches of the galaxy. This week, I’m looking forward to hearing more about infrastructure advancements to address the convergence of HPC and AI, how we’ll meet the growing challenge of energy efficiency for large-scale compute clusters, and how we’ll continue to feed larger and larger models for research team analysis.
So check out the podcasts, blogs and content drops, please follow the TechArena on Twitter, LinkedIn and other social sites, and subscribe to our feeds on Soundcloud, Substack or wherever you get your content. And I’d love to hear from you with feedback on this program, interest in collaborating, and suggestions about who you’d like to see in the arena. Thanks for engaging. - Allyson

Allyson steps into the arena with British physicist Jess Wade to discuss her advocacy for under-represented STEM recognition through Wikipedia biography publication.

To be seen is a first step to true inclusion, and in 2022 I’m regularly struck by how we still collectively struggle to see one another and our contributions to our respective fields. This is a primary reason why, when founding the TechArena, I decided to establish a platform for stories that help build collective appreciation for individuals and teams who may otherwise be overlooked. It’s also why I was strongly drawn to the story of Jess Wade, a physicist at Imperial College London.
To say that Jess is a badass is an understatement. At the Blackett Laboratory, her research focuses on polymer-based organic light-emitting diodes…the technology that fuels the displays we spend our lives staring at, from smartphones to televisions. At thirty-four, she’s well published and on her way to an extraordinary career of contribution like many Imperial College scientists who came before her. But what’s interesting about this story is that Jess was brought to the TechArena, and to the world’s attention, for what she does in her off hours.
Jess has always been an advocate for inclusion, having learned early in life that the STEM field was narrowly represented based on gender, race, and socioeconomic privilege. A few years ago, she decided to dial up her efforts by addressing the recognition gap, or more specifically, the missing stories of minority scientists among Wikipedia biographies. Today, women make up only nineteen percent of all biographies on Wikipedia, and the numbers for STEM-related biographies are even worse. Jess decided to do something about this and dedicated herself to writing a Wikipedia biography every day to shine a light on the incredible scientific contributions made by people who, for whatever reason, were not seen fully for their research. Some interesting things have happened since that you can learn more about in our chat. Jess has also garnered some individual attention for her work, winning a bevy of awards including being named one of Nature magazine’s 10 people who mattered in science in 2018 and winning the British Empire Medal in 2019.
While Jess shared a lot of great insight during our discussion, the thing that made me decide to bring this story to my Supercomputing publications was her observation that “we're really not designing the best tech solutions to all of these huge global challenges if we're only selecting our big coders or our problem solvers from a handful of the population.” This is not solely a feel-good story about delivering recognition; it’s a story of how we collectively reach farther toward new insight and discovery on our most pressing global challenges, lifted up by all of our perspectives. I was heartened to see a wealth of diverse technologists and scientists from a broad range of fields represented in the SC’22 agenda and am looking forward to seeing whether this is reflected in the conference proceedings as a whole. If you’d like to share your perspective on this important topic, please connect on Twitter or LinkedIn.