
Predicting the future of technology is always a bit of a gamble—especially in the fast-moving world of data centers. After all, we’ve all seen how quickly things can change, and the internet never forgets a missed prediction.
Still, as digital transformation continues to reshape industries worldwide, there’s tremendous value in looking ahead and trying to map out where things might be going. So, in the spirit of taking a calculated risk, I’m sharing my thoughts on three major trends I expect to define the data center space in 2025.
Spoiler alert: Data centers could soon do far more than just power the digital world.
1. Data Centers Will Shift from a Twin-Transition to a Triple Transition
The idea of a "twin transition" for data centers—digital transformation and sustainability—has been a major talking point over the last few years. The growing demand for advanced computing, driven by cloud computing, AI, IoT, and automation, has forced data centers to expand capacity as part of their digital transformation strategies. Simultaneously, the pressure to reduce energy consumption, adopt renewable energy, and implement efficient cooling has driven a sustainability transition.
However, with the rapid rise of artificial intelligence (AI), we’re looking at the emergence of a third transition. AI brings its own set of unique challenges that go beyond digital transformation or sustainability alone. First, AI workloads demand specialized hardware like GPUs and TPUs, driving up power, density, and cooling requirements significantly. Second, because AI’s energy consumption can be enormous, achieving sustainability goals while supporting AI’s growth will be a tough balancing act. Finally, AI will demand new infrastructure, new expertise, and entirely new business models to support its operations. By 2025, this triple transition—digital transformation, sustainability, and AI integration—will be in full swing, reshaping the entire data center landscape.
2. OCP Adoption Will Accelerate, and Its Ecosystem Will Evolve
As the data center industry faces this triple transition, the need for flexible, modular, and energy-efficient solutions has never been clearer. Enter the Open Compute Project (OCP). Originally developed by Facebook (now Meta) and supported by the likes of Microsoft, Google, and Amazon, OCP has already proven itself as a robust framework for addressing the demands of digital transformation and sustainability.
What’s exciting for 2025 is the increased adoption of OCP, not just by the tech giants, but also by tier-2 industry players. Leading manufacturers like Nvidia, Dell, and Supermicro have already jumped on the OCP bandwagon, recognizing its potential to meet AI-driven demands without compromising energy efficiency. OCP’s power-efficient designs, which provide power management at the rack level, and its readiness for liquid cooling make it a compelling choice for data center operators trying to balance high performance and low environmental impact.
In 2025, OCP will continue to grow into the cornerstone of data center infrastructure, extending beyond the major cloud players to become a common standard for all types of data centers. The modular nature of OCP means that as AI workloads grow, data centers can scale quickly and efficiently, deploying custom solutions with minimal vendor lock-in. With its proven track record, OCP will play a pivotal role in helping data centers meet the demands of the digital, sustainable, and AI-powered future.
3. Data Centers Will Emerge as Energy Providers and Critical Heat Sources
Data centers have long been known for their massive energy consumption, but by 2025, they could also play a pivotal role as active participants in the energy ecosystem. This shift is driven in part by the growing reliance on renewable energy, which can be intermittent. Hyperscalers—those massive cloud operators like Google, Microsoft, and Amazon—are already beginning to explore options to ensure reliable power supplies. Some are even considering restarting nuclear power plants to meet their growing energy demands, recognizing that clean, consistent power is key to their long-term operations.
In addition to these initiatives, we’re seeing an increasing interest in microgrids powered by Small Modular Reactors (SMRs), which could be deployed to directly power data centers. These small, efficient nuclear reactors could provide the stable, low-carbon energy needed to run data centers around the clock, especially in regions where renewable sources are less reliable. With the ability to provide a dedicated, resilient energy source, SMRs could alleviate pressure on the grid while providing the high energy densities required for AI workloads and other high-performance computing needs. As data centers become more energy-critical, this could lead to them not only drawing power but also supplying it back to the grid during peak demand.
But the energy benefits don’t stop at power supply. Data centers also produce substantial amounts of excess heat, which could be repurposed for local needs. In colder climates, this waste heat could help warm nearby homes or even support industrial processes that require high temperatures. By redirecting this heat, data centers will not only reduce reliance on fossil fuels for heating but also become integrated into local energy ecosystems, boosting overall efficiency. Or as one speaker simply put it: “Data centers will give every kWh a second life.”
As a result, the location of data centers may shift to areas where both power supply and heat demand are high, making them critical players in the future of energy grids. Data centers will evolve from simple digital infrastructure to essential energy assets, playing a key role in urban sustainability.
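To put the heat-reuse idea in rough numbers, here is a minimal sketch. The facility size, recoverable-heat fraction, and per-home heat demand are all illustrative assumptions, not figures from the article; virtually all electricity a data center draws ends up as heat, so the recoverable fraction is mostly a question of capture and transport losses.

```python
# Illustrative sketch of how much usable heat one data center could export.
# All numbers below are assumptions for illustration only.

FACILITY_MW = 20            # assumed total facility power draw
RECOVERABLE_FRACTION = 0.7  # assumed share of input power recoverable as heat
HOME_HEAT_KW = 5            # assumed average heat demand of one nearby home

recoverable_kw = FACILITY_MW * 1_000 * RECOVERABLE_FRACTION
homes_heated = recoverable_kw / HOME_HEAT_KW

print(f"Recoverable heat: {recoverable_kw / 1_000:.0f} MW")
print(f"Roughly {homes_heated:,.0f} homes' worth of heat demand")
```

Even with conservative assumptions, a single mid-size facility could anchor a district heating network, which is exactly why siting near heat demand starts to matter.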
Looking Ahead: Data Centers as Essential Urban Assets
Now, before anyone gets too excited about the idea of a data center on every block, let’s be honest: nobody really wants a data center in their backyard. They’re noisy, energy-intensive, and often not the most visually appealing structures. But here’s the thing—just like no one wants to throw away their phone or PC because they’ve become essential to our daily lives, we’re starting to realize that data centers are becoming just as indispensable. As our reliance on digital infrastructure grows, it’s time to rethink where data centers are located and how they can better fit into residential and urban landscapes, making them quieter, more visually appealing, and seamlessly aligned with their roles in energy distribution and urban sustainability.
In conclusion, the data center industry is on the verge of a major transformation. The next few years will see data centers evolve from isolated tech hubs into integrated energy and infrastructure players. OCP adoption will accelerate, making energy-efficient, modular designs the standard. The triple transition of digital, sustainable, and AI-driven infrastructure will drive these changes, and data centers will not just store data but also play an active role in stabilizing the power grid and supporting urban heat networks. The future of data centers is bright, and their influence will only grow as we move further into the digital age.

While AMD has been consistent in recognizing the new demands of AI-enabled applications, the company remains steadfast in ensuring that AMD EPYC™ processors continue to offer leading performance for traditional compute workloads, such as HPC, database, cloud native applications, collaboration systems, finance, and more.
I recently caught up with Ravi Kuppuswamy, AMD Senior Vice President of Server Product & Engineering, to explore the company’s approach to the evolving landscape of enterprise workloads, hyperscale innovation, and the growing influence of AI.
Traditional compute applications are also adapting, adding elements of AI into their environments, he said.
“In a wide array of apps from Microsoft, Oracle, SAP, we see them adding AI-enhanced tools such as recommendation engines, chatbots, into their application,” he said. “While massive AI models are indeed a significant step…the vast majority of real world applications still are more evolutionary and focused on general compute.”
This dual focus allows AMD to serve diverse customer needs, ensuring that cutting-edge AI capabilities don’t overshadow the ongoing importance of reliable, efficient traditional computing.
A Portfolio Built for Versatility
AMD's diverse portfolio spans CPUs, GPUs, AI NICs, and more, offering flexibility for a wide range of customer requirements. Kuppuswamy described this strategy as “letting customer needs guide the discussion,” highlighting how AMD supports everything from cost-effective solutions to high-performance configurations.
For workloads requiring heavy training or models exceeding 13 billion parameters, AMD’s CPU-GPU combinations, such as the recently launched MI300 series, provide the scalability and efficiency necessary for advanced AI applications. This approach ensures that customers can select solutions tailored to their specific operational goals and budgets.
Hyperscale Design and Energy Efficiency
During the OCP Summit, hyperscale configurations took center stage. Kuppuswamy explained how AMD collaborates with customers to design systems optimized for evolving data center demands. The focus on energy-efficient design is critical, as global technology-related energy consumption rises in tandem with increasing data generation.
AMD’s commitment to open standards plays a significant role in these efforts. By embracing interoperability, AMD fosters innovation that benefits hyperscalers as well as enterprises looking to leverage cutting-edge technology without proprietary limitations.
The Enterprise and Cloud Continuum
Enterprises are increasingly adopting hybrid models that combine on-premises and cloud computing. Kuppuswamy highlighted how AMD technologies enable customers to build robust on-premises infrastructures while seamlessly scaling to the cloud when demand spikes.
This flexibility is especially valuable for enterprises that lack the resources of hyperscalers.
Impact and Future Vision
AMD’s leadership in the data center market has grown significantly, with a remarkable rise in market share from less than 1% to 34% in recent years. This growth underscores the appeal of AMD’s energy-efficient solutions and customer-first approach.
Looking ahead to 2025, Kuppuswamy anticipates a wave of IT infrastructure upgrades driven by outdated systems nearing the end of their lifecycles. He highlighted the dramatic efficiency improvements offered by the latest generation of AMD EPYC processors: replacing 1,000 four-year-old CPUs with just 131 new-generation processors delivers the same workload performance, with significantly reduced power and space requirements.
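The consolidation claim lends itself to quick arithmetic. The sketch below uses the 1,000-to-131 figure cited above; the per-CPU power draws are illustrative assumptions, not AMD specifications.

```python
# Back-of-the-envelope consolidation math using the article's cited figures.
# Server counts come from the text; per-CPU power draws are assumptions.

OLD_CPUS = 1_000          # four-year-old CPUs delivering the baseline workload
NEW_CPUS = 131            # latest-generation CPUs for the same workload

OLD_WATTS_PER_CPU = 280   # assumed average draw of an older server CPU
NEW_WATTS_PER_CPU = 360   # assumed draw of a denser new-generation CPU

consolidation_ratio = OLD_CPUS / NEW_CPUS
old_kw = OLD_CPUS * OLD_WATTS_PER_CPU / 1_000
new_kw = NEW_CPUS * NEW_WATTS_PER_CPU / 1_000
power_saving_pct = 100 * (1 - new_kw / old_kw)

print(f"Consolidation ratio: {consolidation_ratio:.1f}:1")
print(f"Estimated power: {old_kw:.0f} kW -> {new_kw:.0f} kW "
      f"({power_saving_pct:.0f}% reduction)")
```

Even allowing the newer parts a higher per-socket draw, the roughly 7.6:1 consolidation translates into a large net power and space saving, which is the core of the refresh argument.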
Collaboration and Open Standards
One of the most surprising announcements at the OCP Summit was the launch of the x86 Ecosystem Advisory Group, a collaboration between AMD and its key competitors. The initiative aims to establish common standards for compatibility and interoperability, reflecting the company’s commitment to open ecosystems.
So what’s the TechArena take? As data becomes increasingly distributed across edge and cloud environments, AMD solutions empower customers to extract value from this continuum. From the high-performance EPYC 9000 series for data centers to Ryzen-powered endpoints, AMD offers a comprehensive portfolio designed for efficiency and scalability. This adaptability is critical in a world where businesses and consumers demand instant access to data and services.
Tune in to our Data Insights podcast with Kuppuswamy. For those seeking more insights into AMD data center technologies, Kuppuswamy encouraged audiences to explore resources on AMD’s website and social media platforms.

2025 will be a year of rising costs, changing focus, and increased cybersecurity challenges.
The need for cybersecurity is driven by a nearly complete transformation to a digital economy, a transformation that exposes unprepared organizations to a range of malicious actors.
Organizations around the globe are struggling to adequately protect their sensitive data and systems. The rising cost of cybersecurity measures, including cybersecurity subject-matter experts (SMEs), has hindered their ability to effectively mitigate risks. Moreover, a growing compliance landscape diverts resources from meaningful cybersecurity initiatives. As a result, organizations are vulnerable to a range of cyberattacks.
While 2025 may bring more of the same in terms of cybersecurity failures, I see 10 clear cybersecurity trends that organizations worldwide are likely to face:
2025 Cybersecurity Forecast: More Threats, More Challenges
Stay Informed
In 2025, organizations will continue to struggle to adequately protect their sensitive data and systems, but new threats are taking focus. The best way to mitigate these threats is to stay informed about emergent tech trends, so you can make informed decisions about organizational priorities.

In this video from SC24, experts discuss how the world’s largest SSD will transform their AI and HPC workloads, enabling faster access and accelerating growth while reducing data center footprints.

Remember how hot it has been these past two years? We have all had weeks or months of sweltering heat, trying to stay cool at work or at home. However, if you were fortunate enough to work in a data center, then you had it made. Just think of being inside a data center running at its perfect temperature, between 73° and 75°F. How cool would that be?! However, that coolness comes with a big environmental impact.
Data centers are water guzzlers! On average, it takes about 0.48 gallons (1.8 liters) of water to cool just one kilowatt-hour (kWh) of electricity in a data center. In 2022, U.S. data centers consumed approximately 200 terawatt-hours (TWh) of electricity, or about 4% of total U.S. electricity demand. And guess what? This figure is forecast to rise significantly in the coming years due to the growing adoption of 5G networks, cloud-based services, and AI technologies. In addition, the data center construction market in the U.S. is estimated to grow at a compound annual growth rate of 6.6% from now to 2030. Finally, let’s not forget that data centers indirectly consume water through the electricity they use, as power plants need water for cooling during electricity generation. In the U.S., it takes approximately 2 gallons of water to generate 1 kWh of electricity.
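The figures above can be combined into a back-of-the-envelope estimate of the sector’s annual water footprint. The sketch below simply multiplies the cited per-kWh water intensities by the 2022 consumption figure; it is a rough order-of-magnitude exercise, not a formal study.

```python
# Rough estimate of U.S. data center water use from the figures in the text.

DIRECT_GAL_PER_KWH = 0.48    # on-site cooling water per kWh
INDIRECT_GAL_PER_KWH = 2.0   # water used at the power plant per kWh generated
ANNUAL_TWH = 200             # 2022 U.S. data center electricity consumption

kwh = ANNUAL_TWH * 1e9       # 1 TWh = 1 billion kWh
direct_gal = kwh * DIRECT_GAL_PER_KWH
indirect_gal = kwh * INDIRECT_GAL_PER_KWH

print(f"Direct cooling water:  {direct_gal / 1e9:.0f} billion gallons/year")
print(f"Indirect (generation): {indirect_gal / 1e9:.0f} billion gallons/year")
print(f"Combined footprint:    {(direct_gal + indirect_gal) / 1e9:.0f} billion gallons/year")
```

The indirect share dwarfs the on-site share, which is why renewable sourcing matters as much as cooling efficiency.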
The Path Forward
As you can imagine, the key challenges in building and operating sustainable data centers revolve around managing water and energy consumption, adopting innovative cooling techniques, handling e-waste, ensuring supply chain sustainability, and complying with regulatory requirements. Traditional methods like Computer Room Air Conditioners (CRAC) and Computer Room Air Handlers (CRAH) still account for 80% of all data center cooling, underscoring the need for new thinking. Innovative cooling techniques not only improve cooling efficiency but also help reduce energy consumption and environmental impact. These techniques include:
Moreover, industry must focus on renewable energy sources to power data centers. By integrating solar, wind, and other renewable energy sources, data centers can reduce their reliance on fossil fuels and minimize their carbon footprint.
Unfortunately, data centers do contribute to water insecurity, particularly in regions where water resources are already scarce. The emphasis on sustainability and environmental impact will lead to more strategic and environmentally conscious decisions regarding the locations of future data centers. So, the next time you run a search for a cat video, think of the amount of water that was consumed to feed your thirst for fun.

In this podcast, MLCommons President Peter Mattson discusses their just-released AILuminate benchmark, AI safety, and how global collaboration is driving trust and innovation in AI deployment.

When compared to other industries, such as consumer electronics, the automotive industry moves relatively slowly. Historically, for the incumbent automotive OEMs, there has been a five-year development cycle for new vehicle introduction and a two-year cycle for limited platform upgrades. Initiatives like Software Defined Vehicles (SDV) are focused on addressing these long development cycles with more efficient use of R&D and reuse of vehicle platforms; however, in general, for the broader market, the SDV has not yet arrived. That said, emerging Chinese automotive OEMs who have been able to design EVs starting from a clean sheet of paper appear to be able to reduce the design cycle from five years to two years.
So, looking into the crystal ball to predict the future of automotive, specifically in 2025, I predict we will see mostly incremental changes that align with broader industry megatrends, sometimes referred to as CASE: Connected, Autonomous, Shared, and Electric.
Unpacking the underlying details of each of the elements of CASE will help to understand the incremental progress that will be made in the 2025 timeframe. Before I discuss each of these specific megatrends and share my thoughts on what incremental announcements we can expect to see, I want to summarize some broader industry trends that I expect we will also see.
OEMs will be even more aggressively bringing engineering teams back in-house and embarking on their own ASIC designs in an attempt to develop truly unique, differentiated solutions vs. those from merchant semiconductor suppliers. However, the automotive industry still relies heavily on semiconductor and sensor solutions from third-party suppliers. Because of this dependency, the OEMs provide clear signals and direction to those suppliers well in advance to ensure solutions that are aligned with OEM platform requirements and automotive platform development/deployment timelines. With such clear guidance being signaled to the solution suppliers well in advance of the actual need, predicting the future with some level of accuracy can be relatively straightforward.
It is almost a certainty that at the 2025 CES show, now one of the largest automotive shows, many new concepts will be on display, in many cases an attempt to throw something at the wall to see what sticks. It is also almost a certainty that dependency on AI, and integration of the technology, will see exponential growth: it will be used to provide a more tailored customer and passenger experience and to address new capabilities, like virtual reality, that will help further improve automotive safety.
What is also clear is that in 2025, there will be a reconciliation and reassessment among semiconductor players that reconsider or exit the automotive industry, as the Gartner Hype Cycle for autonomous vehicles has transitioned from the Peak of Inflated Expectations in 2016 to entering the Trough of Disillusionment in 2025. Typically, there is significant over-investment at the Peak of Inflated Expectations, which then gets reined in once the market enters the Trough of Disillusionment. Intel’s relatively recent spinoff of Mobileye is a good example of what can be expected going forward as the true market size, and the lengthy time to revenue for automotive and ADAS especially, comes into focus.
Semiconductor startups with a pure-play focus on autonomous vehicles will more than likely get acquired, go out of business, or pivot to focus on other markets. The costs of semiconductor development are significant, and while innovative solutions can yield differentiated results, automotive OEMs tend to be risk-averse and reluctant to embrace new semiconductor suppliers in critical application areas. The very long time to revenue, for ADAS and automotive in general, will eventually cause venture capitalists looking for large returns in a short period to lose their patience and exit the ADAS pure-play semiconductor companies.
Another broader trend is that software competence, along with AI, is becoming a critically important differentiating factor for ADAS, connectivity, and in-vehicle experience. Historically, auto OEMs have been referred to as “metal benders,” with most of the other elements of the vehicle addressed by third parties. As today’s leading vehicle contains 100 million lines of code, growing to 1 billion lines of code by the next decade, we can expect that in 2025 and beyond, a great deal of focus will be placed on developing software competencies that not only address the operational domains of the vehicle but also support the new mobility ecosystems driven by consumer trends and demands. Software engineering can easily prove to be a bottleneck that will need to be addressed.
Connected
The connected tenet of CASE does a lot of heavy lifting, as many different key attributes of the vehicle are captured in this trend. To pause and give a feel for the importance of connectivity in the automobile, consider how anxious you feel when your internet connection is down, or how useless your tablet would be without internet connectivity. Going forward, without connectivity, the vehicle will feel just as useless as the unconnected tablet does today. Some of the fast-growing, underlying trends that will necessitate a network connection include:
The tablet (or perhaps smart TV) analogy also extends to the automobile in that multiple high-resolution displays will be found throughout the vehicle, supporting an immersive digital experience that relies on connected services to deliver personalized content based on user context awareness. In 2025, I predict you will see more and larger displays in vehicles being announced, with many touting 8K resolution.
I also expect more OEMs will announce V2x, which enables vehicle-to-vehicle and vehicle-to-infrastructure communication, greatly enhancing vehicle safety. This increase in vehicles supporting V2x will be driven both by OEMs citing safety-consciousness and by NCAP (New Car Assessment Program), a collection of similar government-backed programs across many countries that specify, on an annual basis, which technologies a vehicle model must deploy to receive a high safety rating. In time, just like seat belts, airbags, and rear-view cameras, these safety features will ultimately become mandatory in new cars.
In 2025, I expect more OEMs to announce, or perhaps more appropriately, preannounce OTA support in conjunction with SDV, based on an underlying centralized E/E architecture (see blog). Cybersecurity will also be a big topic discussed and addressed in 2025, including OEM and auto semiconductor supplier announcements embracing the ISO 21434 cybersecurity standard (see blog). This will be essential, as a fully connected platform has multiple potential attack points that pose a serious cybersecurity threat.
As mentioned earlier, expect to see more AI in the cabin, allowing for greater tailoring to both the driver’s and passengers’ personal preferences. Also expect greater integration of the vehicle with the consumer lifestyle, including smartwatches and smart homes. In many cases, these will be announcements with demonstration platforms; mass deployments will most likely come later. Because the consumer’s vehicle purchasing decision is increasingly driven by the capabilities the connected car enables, and because those capabilities give OEMs an opportunity to sell lucrative after-market and subscription-based services, I expect this space will see a lot of pre-announcements and “concepts,” as automotive OEMs have dire FOMO (fear of missing out) in this area.
Autonomous
The promise of fully self-driving cars for consumers is going to continue to see a lot of heat and light in 2025; however, practical deployments will continue to be pushed out for several reasons: technical feasibility, legislation, and end price.
To date, only one major OEM has been certified to support Level 3 ADAS in the US, a far cry from Level 5 full autonomy. Level 3 still requires the driver to be engaged and ready to take over control of the vehicle while the car is in a “semi-autonomous” state. In the case of this one major OEM, there was a significant number of “ADAS disengagements” for given miles traveled, an indication that the ADAS system was overwhelmed and a driver was required to step in. It’s also important to note that current US legislation limits the roadways where Level 3 can be deployed; these roadways are considered low-risk and suitable for L3 capabilities. ADAS disengagements on low-risk roadways reflect the complexity of the AI problem and the fact that a combination of more AI training and more AI performance (TOPS, tera operations per second) is required to reduce them.
While robotaxis today offer full autonomy, they operate in a geofenced area (limited roads and very controlled range) and have exorbitant costs driven by extremely high compute performance processors and sensors. Given their business model, robotaxis can eventually amortize those costs much more easily than the consumer-owned vehicle.
I predict announcements of higher-performance AI processors targeting L3 ADAS with integrated in-vehicle-infotainment features. This is in alignment with the move towards the centralized E/E architecture. Auto OEMs will also be announcing more vehicles with L2+ to L3 support – available “soon”. In support of ADAS, many of the key sensor technology manufacturers will be highlighting next-generation technologies that will allow for lower total system cost and higher accuracy. 3D radar will be a hot topic as will solid-state LIDAR. Also, expect to see demonstrations of ADAS cameras with night vision – directly challenging the need for LIDAR. The lower cost, higher accuracy sensors will play a key role in driving the viability of ADAS for the masses.
In-cabin driver monitoring systems (DMS) will also see a significant uptick for reasons similar to V2x: NCAP is increasingly driving this as a mandatory feature. Legislation requiring DMS regardless of ADAS level, to detect drowsy or intoxicated driving, is currently being considered in the US and has passed in the EU. We will also see increasing support for occupancy detection systems via additional in-cabin cameras, where again legislation is being considered to make this capability mandatory to avoid fatalities when children or pets are forgotten in hot vehicles. These cameras will in turn serve a dual purpose, integrating with in-vehicle infotainment systems to support gesture recognition, where hand motions can be used to control functions in the car.
Shared
I predict we will see announcements of new entrants in the shared market space (transportation as a service), in addition to partnership announcements among application providers, as Gen X and Gen Z show trends toward moving away from vehicle ownership and getting their driver’s licenses at a much later age than previous generations. These cost-conscious generations opt for an Uber rather than taking on the costs associated with vehicle ownership. In general, the concept of having the flexibility to choose the best vehicle for a specific need, on demand via a smartphone, is gaining strong popularity.
Furthermore, the aging population will gravitate towards robotaxis to continue to have freedom when they are no longer able to drive. Areas of high population density will also gravitate toward subscription-based transportation, eliminating the need to find parking and pay exorbitant parking fees. As proven demand continues to grow, you can expect to see more entrants in this market, either through the introduction of new players or incumbent OEMs finding this market lucrative. To that point, Tesla has continued to signal its intent to enter the robotaxi / shared market for some time.
Also, expect to see announcements and demonstrations from OEMs offering brand consistency across robotaxi fleets – where the passenger’s personal preferences are downloaded into the robotaxi as soon as the vehicle has been hailed ensuring a seamless and consistent experience between any shared vehicle of a similar brand – similar to the continuity across different types of Apple products and platforms.
An interesting trend, which Mobileye has taken the lead in moving towards, is to pivot from being only a semiconductor solutions provider to the automotive OEMs to now getting into the shared transportation business directly – expecting to collect end revenues directly from shared riders. As the automotive semiconductor suppliers build more complex, complete solutions stacks, it is unclear if others will follow that lead, but I could see how it would become enticing. That’s a hard one to call for 2025.
Electric Vehicles
Lastly, I predict we’ll see more announcements and introductions of EVs. This is not only driven by EV mandates but increasing consumer preference. EVs also offer significantly lower barriers to entry in terms of R&D and manufacturing costs as compared to vehicles based on the Internal Combustion Engine (ICE). This has already led to many new market entrants, most notably Tesla, but increasingly from Chinese OEMs, as they see a way to penetrate the high-revenue, high-growth automotive market.
Key technology areas of focus and announcements will include:
These announcements will lead to lower costs, longer battery life, greater energy efficiency, and reduced range anxiety. Suffice it to say that EVs will also support the aforementioned Connected, Autonomous, and Shared trends and capabilities.
In short, the automobile, and the automotive industry as we know it, have already changed dramatically. While many different acronyms exist that summarize the automotive industry megatrends, CASE works reasonably well. In 2025, we will see more incremental steps, announcements (and plenty of “pre-announcements”), and demonstrations that are aligned with realizing the vision of CASE.
One prediction I am 100% confident about is that I got some predictions wrong and missed others. Time for a new crystal ball.
Do you hear that sound? The pitter patter of the second half of the decade marching towards us? As we ready ourselves for the close on the first half of this monumental decade, the team at TechArena has been busy exploring those things that have passed, and those yet to come.
As I researched this article, I read back on predictions made in late 2019 about the first half of the decade's advancements. While many predicted AI's continued march, no one called the behemoth that is generative AI and ChatGPT. Many predicted mass advancement of 5G technology, and while 5G proliferation has grown around the globe, we've debated whether its true benefits, such as network slicing and cloud native automation, have been fully delivered.
Others forecasted massive advancements in autonomous driving, and the reality today is far from the 2019 vision. A fearless few called out a rise in remote work. While no one in the mainstream had a global pandemic on their bingo cards, those targeting that trend certainly have much to be proud of.
With this in mind, we dip our toes into the second half of the decade and what's to come. We'll have a series of articles this month from various experts from across the tech landscape weighing in on their relative domains of expertise. To get us started, I'll offer my five top trends for 2025.
Want to hear more about 2025 and the second half of the decade? Watch this space as TechArena experts chime in over the coming weeks on their insights into what's next in tech. For now, I'd love to hear from you on LinkedIn about your views on what's coming in the new year.

In 2015, the now-famous Jeep Cherokee cyberattack made the automotive community and car owners alike suddenly aware of the significant liabilities posed by attacks on vehicles' electronic control systems. In this breach, security researchers remotely accessed the Jeep and gained control over vehicle functions including steering and braking. They entered through the vehicle's entertainment system, via the cellular connection responsible for internet services. And while a software patch was provided to address this vulnerability, the attack raised heightened awareness of the connected car's vulnerabilities. Attacks of differing severity have since been demonstrated on vehicles from BMW, Nissan, and Tesla, as well as on the Chevrolet Corvette.
With over 500 million connected vehicles on the road today, and the lines of code in a vehicle exploding from 150 million to a projected 1 billion by the end of this decade, the number of possible cyberattack points is growing, and cybersecurity is drawing ever greater industry focus. This is for good reason: just by surveying the different networks in the car and their impact on control over the vehicle, it's clear that a critical focus must be placed on cybersecurity.
Automotive networks (LIN, CAN, FlexRay, and Ethernet) provide different forms of connectivity within the car. The different network types address the unique performance requirements of the various Electronic Control Units (ECUs). They also provide opportunities for cyberattacks. The ECUs themselves have direct control over various aspects of the vehicle, including:
● Engine Control
● Transmission Control
● Steering Control
● AirBag Control
● Braking Control
● Navigation Systems
As one can see, the motivation to ensure a robust defense against cyberattacks is very high.
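To make the exposure concrete, here is a minimal sketch in Python of why the classic CAN bus is attractive to attackers: any node on the bus can transmit a frame with any identifier, including identifiers that safety-critical ECUs act on, because classic CAN has no sender authentication. The arbitration IDs below are invented for illustration; real IDs are OEM-specific and not public.

```python
from dataclasses import dataclass

# Hypothetical mapping of arbitration IDs to safety-critical functions.
# These values are placeholders, not real OEM assignments.
CRITICAL_IDS = {0x0C0: "engine", 0x0D0: "steering", 0x1A0: "braking"}

@dataclass
class CanFrame:
    arbitration_id: int  # 11-bit identifier; lower value wins bus arbitration
    data: bytes          # up to 8 payload bytes on classic CAN

def is_critical(frame: CanFrame) -> bool:
    """Flag frames addressed to safety-critical ECUs.

    Classic CAN has no sender authentication, so any compromised node
    can emit a frame with any ID -- the root of many demonstrated attacks.
    """
    return frame.arbitration_id in CRITICAL_IDS

frame = CanFrame(arbitration_id=0x1A0, data=bytes([0x00, 0xFF]))
print(is_critical(frame))  # True
```

In a real stack, frames would come off the wire via a library such as python-can rather than being constructed by hand; the point here is only that the receiving ECU trusts the identifier, not the sender.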
With the continued growth in lines of code, and the move to the Software-Defined Vehicle (SDV), Over-the-Air (OTA) updates will be commonplace, and every update carries a real risk of containing malware that goes undetected. Exposing a car to the internet makes it vulnerable to cyberattacks if its software isn't written properly, which could render the car unstable or dangerous.
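One basic OTA safeguard is verifying an update image against a trusted tag before flashing it. The sketch below uses an HMAC as a stand-in; production systems use asymmetric signatures and managed update processes (UN Regulation No. 156 covers software update management for vehicles). The key and payload here are placeholders for illustration only.

```python
import hashlib
import hmac

# Placeholder key for illustration; real systems derive keys from a
# hardware root of trust and never hard-code them.
TRUSTED_KEY = b"example-shared-secret"

def sign_image(image: bytes) -> str:
    """Compute an integrity tag over the update image."""
    return hmac.new(TRUSTED_KEY, image, hashlib.sha256).hexdigest()

def verify_before_flash(image: bytes, received_tag: str) -> bool:
    """Reject any image whose tag doesn't match before it is flashed."""
    # compare_digest is constant-time, avoiding timing side channels
    return hmac.compare_digest(sign_image(image), received_tag)

update = b"\x7fELF...firmware-payload"
tag = sign_image(update)
print(verify_before_flash(update, tag))            # True
print(verify_before_flash(update + b"\x00", tag))  # False: tampered image rejected
```

Even this toy version illustrates the principle: a single flipped byte in transit causes the update to be rejected rather than installed.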
In August 2021, the ISO 21434: 2021 international standard was introduced. This standard specifies the engineering requirements for vehicle cybersecurity with the intent to reduce the risk of cyberattacks by embedding cybersecurity best practices in the automotive industry. The focus is on the protection of automotive electronic systems, communication networks, control algorithms, software, users, and underlying data from malicious attacks, damage, unauthorized access, or manipulation.
It's key to note that the standard does not specify how to implement cybersecurity solutions per se; it specifies best practices to be used in designing a system, in a manner similar to the systematic fault coverage associated with the ISO 26262 functional safety standard. Systematic fault coverage doesn't dictate how to implement functional safety, but provides a methodology to ensure industry best practices for safety are used in the design, test, and verification of a device/system and its software. Interestingly, ISO 21434 is not a mandate but a recommendation. Per the specification: "automotive suppliers and OEMs should strongly consider integrating ISO 21434 into their current process."
Functional safety and cybersecurity are interdependent: a vehicle cannot be safe if its behavior can't be predicted or controlled in the desired manner. One of the first tasks in designing for cybersecurity is to perform a Threat Agent Risk Assessment (TARA), which prioritizes the specific areas that are most critical and most vulnerable to attack. Many of the modeling techniques employed in the defense industry are being applied to TARA. Suffice it to say that cybersecurity is a very complex area that could fill many pages without even scratching the surface.
Interestingly, as part of the modeling, there is a detailed review of the threat agents – which identifies the different parties that would have motivation to try and hack the connected vehicle. The list is quite long and ranges from car thieves whose motivations are quite clear, to radical activists who are looking for fame and glory and also includes the poorly trained employee who unintentionally designs in a threat. This modeling is then used to develop a cybersecurity strategy and plan.
In conjunction with TARA, there is a common exposure library (CEL) that is used to identify all the possible areas of exposures and vulnerabilities associated with the connected car. These include:
● WiFi
● Cellular Connection
● Bluetooth
● TPMS (Tire Pressure Monitoring System)
● OBD II (On-Board Diagnostics Port)
● USB
● EV Charging Port
● V2X (wireless vehicle-to-vehicle and vehicle-to-infrastructure connectivity)
V2X ("vehicle-to-everything"), which is seeing different levels of adoption by geography, allows vehicles to communicate with one another and with smart-city infrastructure over wireless connectivity loosely based on WiFi. Because vehicles can talk to one another, events like multi-car pileups, which typically happen where roadway visibility is poor, can be avoided simply by telling approaching cars that the car ahead is stopped. V2X is typically connected directly to the ADAS system, which can assume control over the vehicle specifically to address these situations. One can easily envision how dangerous a spoofed communications network could be, again underscoring the need for robust cybersecurity.
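A TARA-style prioritization over an exposure library like the one above can be sketched as a simple impact-times-likelihood ranking. The numeric ratings below are invented for illustration; real assessments use structured scales and far richer attack-path modeling.

```python
# Toy TARA-style scoring: risk = impact x likelihood for each exposure
# surface. Ratings (1-5) are hypothetical, chosen only to show the method.
exposures = {
    "Cellular":    {"impact": 5, "likelihood": 4},
    "Bluetooth":   {"impact": 3, "likelihood": 4},
    "OBD-II":      {"impact": 5, "likelihood": 2},
    "TPMS":        {"impact": 2, "likelihood": 3},
    "EV charging": {"impact": 4, "likelihood": 3},
}

def rank_risks(surfaces: dict) -> list:
    """Return (surface, score) pairs, highest risk first."""
    scored = [(name, v["impact"] * v["likelihood"]) for name, v in surfaces.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

for name, score in rank_risks(exposures):
    print(f"{name:12s} risk={score}")
```

The output of such a ranking feeds directly into the cybersecurity strategy and plan: the highest-scoring surfaces get the deepest defensive investment first.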
But gaining control over the vehicle's operation is not the only cybersecurity risk. A considerable amount of personal, confidential data now lives in the car, ranging from credit card payment information for EV charging to biometric data collected so that AI can tune the cabin to the driver's and occupants' preferences. A recent podcast with Lin Sun Fa, CEO at Emobi (a supplier of digital infrastructure for a secure and seamless EV charging experience) shone a light on some of the security challenges in EV charging and the level of personal information that can potentially be accessed.
While I have barely scratched the surface on this complex topic and technology, it’s clear that the importance of cybersecurity cannot be overstated, and the importance only continues to grow as Software-Defined, connected vehicles with OTA and a massively growing code base become more common.

Learn how Graid is changing the game for AI and data-intensive workloads, breaking through bottlenecks and allowing users to maximize output from their HPC infrastructure.

Introducing a new technology into a workplace, even a change as minor as a user interface update, can feel like walking a once-known pathway with our shoes on the wrong feet.
I was given a choice recently: Continue evolving in a known routine from well-worn pathways of those who went before me, join the pack regressing into the basics that worked 20 years ago, or jump into a new solution space again.
It’s easier to be a passive observer and supplier than advisor, solution explorer and creator. However, a close mentor of mine said, after working with me on two major technology transformation efforts, “You’re at your best when you’re a bit scared.”
So, I decided to stare down fear and jump into a role leading Intel’s AI Center of Excellence. This will give me the chance to leverage my background in products and technology, while exploring answers to the question I’ve been asked by execs so often throughout my career: “What do we do about this thing?”
Artificial intelligence (AI) is at a very interesting stage of development, with some entertaining extremes. A customer told me once that AI is marketed as end-to-end, but in reality, it’s End 1 and End 2 with no middle at this point.
Think of End 1 as the AI race to the moon: the paper publishing, the consultants, the visionaries, the human augmentation.
End 2 is humans training humans and humans training applications: "no programming required."
Extreme naysayers may miss that AI is a building block in many of the products and services they already consume, from search engines and e-commerce sites to streaming services. There absolutely is a boom in new infrastructure from AI, this generation's race to the moon, and it has been a long time coming to fruition, given the field is mature enough to have dedicated university degree programs.
What excites me the most about new technology is seeking out the new twists formed from scarcity of ideal solutions to an overabundance of non-ideal components, combined with scrappy solution “MacGyvers.”
What challenges and what promise do you see in AI? Comment on LinkedIn to start a conversation.

This special In the Arena episode features co-host Robert Bielby and Emobi CEO Lin Sun Fa as they dive into EV security, advanced charging tech, and the future of automotive innovation.

We had the delight of sitting down with Bob Rogers, CEO and Co-founder of Oii.ai, to discuss leadership, what inspired his career in tech, what trends or emerging technologies he believes will significantly impact the industry, and more.
A veteran in tech and former chief data scientist at Intel, Bob is also an author and changemaker. His company’s AI-powered simulation platform, Optii, ensures that enterprises achieve their product availability service goals with the lowest capital requirements possible.
We learned that Bob is a surfer, an ex-soccer player, a physics nerd who fell in love with the power of computers early on – and a thoughtful leader who believes mutual respect and trust is key to inclusion. Read on to learn more about his journey in life and in tech:
1) Q: What is a phrase that most defines you as a leader?
A: I’m an ex-soccer player. That means I know how powerful a team in synergy can be, so I want to create big goals for the team and make sure everyone knows they have an important role to play in achieving those goals together.
2) Q: What inspired you to pursue a career in technology?
A: I was a physics nerd who discovered that it was easier to model complex systems (like ultra-massive black holes in other galaxies) on a computer, rather than to work out the equations the hard way! Once I got a taste of this power, there was no turning back. I also realized early on that it was a great way to get paid for doing work I enjoyed anyway.
3) Q: Share an example of a risk you took that helped shape your career path.
A: I left my physics postdoc at a respected research center to co-found a quantitative futures hedge fund, based on forecasting models I hadn’t developed yet. Looking back, I guess that was a risky move.
4) Q: Tell us about a job that you hated.
A: Haha. As a 12-year old, I was asked to pick “big” rocks out of a recently rototilled yard. The problem was, I didn’t know the threshold for what a “big” rock was. As the sizes got smaller, the number of rocks got larger and I spent a lot of time at it. In the end, the boss was irritated that I spent so much time on such a “simple” task. It was a great lesson about making sure everyone understands the definition of success before starting on something.
5) Q: What’s a pivotal challenge you faced? How did you navigate it?
A: Early in my career, I didn’t know the difference between a technology and a product. I built an amazing technology (a trading platform that had made money eight years in a row) – but when I tried to market it to institutional investors, it took me a year to realize that their specific needs weren’t served by the way I had structured the company. Great tech, no product. That lost year cost me a lot.
6) Q: Wow – can you share a bit about the lessons you learned about how to effectively productize?
A: Understand what your customer needs. It seems obvious that every investor wants high returns, but that's not enough. My institutional investors needed a futures-based investment vehicle that fit their compliance and legal requirements. That meant a very specific instrument called a "managed futures account," rather than an LP or other vehicle.
More generally, to learn what your customers need, listen to their storytelling, walk a mile in their shoes, and watch them work with your prototypes. You will see where you are on track and where you have drifted off course. Critically, it's not what they say they need, but what you observe they need that will guide you to success.
7) Q: Tell us about a moment you are most proud of in your career.
A: When I was Chief Data Scientist at Intel, I co-led an amazing program called “Intel Inside: Safer Children Outside.” We built AI automation models to help the National Center for Missing and Exploited Children (NCMEC) process millions of reports of online child exploitation. When we started, there was a 30-day backlog for processing reports and getting crucial information to the authorities. My proudest moment was two weeks after we deployed our AI: I learned that the NCMEC backlog had been completely cleared, and law enforcement was able to receive information the same day the crime was reported. The impact on child safety has been measurable.
8) Q: How do you approach building a company culture that encourages innovation and inclusivity?
A: I encourage respect and mutual trust among my team members and between them and me. Don't second-guess your team when they take risks, regardless of the outcome. At the same time, to ensure that environment of mutual respect and trust is supported, immediately call out (privately) anyone who breaks the trust and respect paradigm.
9) Q: What mindset do you want to instill throughout your team?
A: We are changing the world for the better. We might have some moments where we are working harder or longer than we'd like, but we're doing it for a reason. Also, part of making the world a better place is ensuring that team members have the time and mental space to be present with their families, so these two things have to work hand-in-hand.
10) Q: What trends or emerging technologies do you believe will significantly impact your company or the industry?
A: AI is only at the beginning, and there will be a next generation of AI coming soon. Right now, GenAI doesn't know anything… it just talks a good game. When AI knows things for real, and can take a position and reason on it, that will dramatically change the utility of AI in our work and our lives. I'm paying close attention to the tech and building for that future.
11) Q: How do you handle setbacks or failures, both personally and as a leader?
A: Just keep putting one foot in front of the other and present as positive an interface as possible. My first wife died suddenly at age 51. To help my children get through that, I just had to keep going every day and model resiliency for them. Setbacks in a startup require the same mindset.
12) Q: What attributes do you most admire in others that you wish you had?
A: The ability to clearly and immediately see the right action, take it, and communicate it, regardless of how angst-y it might be. This is especially true with personnel: You want to give people a chance to course correct, but your instincts are usually right, and the longer you wait to make a personnel change, the more you hurt everyone else in your organization.
13) Q: Name a role model or mentor and a key thing you learned from them.
A: Shahin Hedayat, my co-founder at Apixio and an incredibly successful startup CEO, is always firm, calm, and clear. He never leaves room for ambiguity and makes sure that every goal is well-defined. He can be firm and kind simultaneously.
14) Q: What practices or habits do you rely on to keep your mind sharp and your energy up amid the demands of being a CEO?
A: Exercise is crucial. I surf whenever I can, even though I'm a pretty rough surfer (in Hawaii they would probably call me a kook). Inland, swimming and low-key cycling are good substitutes.
15) Q: What advice would you give your 23-year-old self?
A: Ask more questions and listen. In school, you get the mistaken impression that what you know is the most important thing. In life, being able to ask questions, and knowing what you don't know, are even more important. It took me a while to learn that, but when I did, it was a game-changer for me.
16) Q: On a personal note, what team(s) do you root for?
A: SF 49ers, SF Giants, Man City
17) Q: Where would people find you on a typical Saturday?
A: Home Depot
18) Q: What is the book that you most recommend?
A: You mean, other than my own books? Lol. I loved The Alchemist by Paulo Coelho. Think Again by Adam Grant, and How to Read Water by Tristan Gooley are also thought-provoking.
19) Q: What’s at the top of your playlist?
A: Anything hard rock.
20) Q: What superpower do you have that people don’t know about you?
A: I'm a "night thinker." I'm pretty slow to figure things out during the day, but if I think about a problem before I go to sleep, I will often have the entire solution worked out when I wake up the next morning. It's not 100% reliable, but it's pretty handy when it works!

In this episode, Eric Kavanagh anticipates AI's evolving role in enterprise for 2025. He explores practical applications, the challenges of generative AI, future advancements in co-pilots and agents, and more.

Broadcom completed its $69 billion acquisition of VMware on November 22, 2023. In the year since, we've seen drastic pricing and packaging changes, a new channel strategy, and a restructuring of the internal and external community.
It makes you wonder, how have these changes affected IT strategy? And are there any examples of past acquisitions that can help us make sense of the changes? The Oracle acquisition of Sun Microsystems is a great place to look for lessons.
Sun’s acquisition by Oracle and its aftermath:
Sun Microsystems, founded in 1982, shares many similarities with VMware, particularly in their vibrant user communities. Sun's community, known as Big Admin, was highly informative, even though it was a bit intimidating to baby sysadmins like me. Sun's flagship event for Java developers was JavaOne.
Sun also played a pivotal role in laying the groundwork for today's cloud computing. At the forefront of UNIX adoption, Sun's SPARC hardware featured their successful early RISC processors. The Solaris software system, tailored for Sun hardware, was renowned for its stability and security. Sun introduced the world to the Java programming language, NFS, and the first building blocks of containers, unveiling the first container system, Solaris Zones, in 2004.
However, despite their early successes in the workstation market, Sun struggled to survive the burst of the dot-com bubble in 2000. As a sysadmin during that period, I saw our users (astrophysicists) favor x86 architecture and Linux for faster processing over SPARC architecture. Sun faced tough competition from vendors running open-source Linux on x86 systems.
In 2009, Oracle acquired Sun Microsystems for $7.4 billion, gaining access to their hardware, operating system, and the Java codebase. This acquisition also marked the loss of the Big Admin community.
Doesn’t that sound familiar?
Broadcom's Revenue Goals and Strategic Shifts:
Prior to the acquisition, Broadcom announced their goal for 70% of Annual Recurring Revenue to come from 600 key customers in highly regulated industries. To achieve this, Broadcom implemented several strategic pricing and packaging changes.
Initially, they shifted from offering perpetual licenses to software subscriptions, forcing adoption by ending support for existing licenses. Next, they streamlined VMware's 8,000 SKUs down to four core offerings.
Broadcom also revamped VMware's partner go-to-market strategy, requiring existing VMware partners to receive invitations to become Broadcom partners. Additionally, VMware transitioned approximately 2,000 top accounts from the channel to direct management, aligning with Broadcom’s broader strategy.
By focusing on a smaller customer base, Broadcom can significantly reduce sales and marketing expenses. This has led to substantial workforce reductions at VMware, including an estimated 30% cut in recent weeks. The 2024 VMware Explore conference (formerly VMworld) was notably smaller, prompting speculation about its future.
Broadcom's VMware Acquisition and Its Impact on IT Infrastructure:
The Broadcom acquisition marks a significant transition in IT infrastructure management. Previously, VMware was the go-to hypervisor for IT infrastructure, offering predictability despite not being the most affordable choice. Companies could predict their expenses, and VMware's expansive ecosystem of supported products—ranging from hardware and security to backups and encryption—was a foundational element for countless organizations. Many built their IT ecosystems on VMware vSphere, never expecting that relying on top-tier virtualization technology would lead to vendor lock-in.
However, Broadcom's changes to licensing have introduced unexpected and considerable price hikes, with IT organizations now bracing for at least a 100% increase in costs. As one analyst put it, these changes seem like an "effort to divorce the customer."
This places companies in a challenging position, as replacing VMware as a hypervisor is no simple task. Even larger enterprises are feeling the strain. For instance, AT&T sued Broadcom for altering support contracts, aware of the difficulties involved in migrating to an alternative hypervisor.
However, this scenario also opens opportunities for competition.
Emerging Opportunities for Competitors in the Hypervisor Market:
For decades, VMware has led the server virtualization market. In 2020, the total addressable market for server virtualization was valued at $7.1 billion, with projections to reach $10.8 billion by 2027, underscoring virtualization's critical role in the architecture of cloud computing.
However, Broadcom's assertive strategy and increasing costs have created opportunities for competitors. Microsoft and Nutanix offer rival hypervisors, and they are building extensive partner ecosystems. Scale Computing's hyperconverged infrastructure (HCI) system directly challenges VMware Cloud Foundation. Google Cloud has been actively pursuing VMware's customer base.
Conclusion:
The completion of Broadcom's acquisition of VMware has brought significant changes over the past year. The shift from perpetual licenses to software subscriptions, the reduction of VMware's SKUs to four core offerings, and the restructuring of the partner go-to-market strategy have all been pivotal moves.
These changes have led to substantial VMware workforce reductions and a notable increase in costs for IT organizations. However, they have also opened up opportunities for competitors in the hypervisor market, such as Microsoft, Nutanix, and Scale Computing.
Looking ahead, VMware's journey under Broadcom will likely be characterized by a concentrated effort to capitalize on its core offerings and deepen relationships with its top-tier customer base. While this strategy offers potential for streamlined operations and focused innovation, it also presents challenges in maintaining broad community support and flexibility for customers.
In fifteen years, people may forget how critical VMware was to IT infrastructure operations, much as people have forgotten the contributions of Sun Microsystems. vSphere and vCenter may be a distant memory to most people as the IT landscape continues to evolve. Perhaps that is the lesson in the Broadcom acquisition of VMware: the only constant in life is change.

The Supercomputing (SC24) conference in Atlanta brought together some of the brightest minds in high-performance computing (HPC) to discuss some of the biggest challenges facing our planet.
After spending four days among the roughly 17,000 highly enthusiastic technologists gathered here, three highlights stand out in my mind.
Georgia Tech stole the show on opening night in the exhibit hall with a band performance that featured students playing instruments alongside a robot that plays the xylophone “by ear.” What struck me about the performance was not only the sheer joy it elicited, but the teaming of humans and robots – a topic that is one of many at the forefront of engineering research in academia today.
As the sound of Mr. Brightside (and several other hit songs) rang out from the Georgia Tech booth, crowds gathered with smartphones held high to capture footage of the delightful spectacle.
The robot, which is shaped like an arm, “listens” to music and plays along in real time.
“We have this robot that was put together by one of our school of music faculty – and it’s amazing because it can listen to what it’s hearing and then can play music that accompanies whatever it’s listening to,” said Professor David Sherrill of Georgia Tech. “It’s using AI techniques to do this.”
During the opening night panel, experts took center stage to discuss the ability of HPC to revolutionize industries and shape the future, delving into four arenas where HPC has made transformative impacts: scientific discovery, engineering for societal benefit, arts and entertainment, and workforce development.
Panelists showcased how HPC underpins advancements in sustainable aviation, realistic visual effects in films, astrophysics simulations, and quantum computing – emphasizing its capacity to manage complex, multi-faceted problems.
Panelist Silvia Zorzetti of Fermilab – an expert in quantum computing – likened the field to an artist’s toolkit, where instead of paintbrushes, scientists wield qubits to explore the fabric of reality. Unlike classical computing, quantum computers excel at simulating quantum phenomena — problems governed by the principles of quantum mechanics. Hybrid quantum computation, which is a synergy between classical and quantum computing, represents the future of the field, Silvia said.
Quantum computing helps to better understand nature and is particularly well-suited to solving molecular energy calculations, particle interactions, and quantum field simulations, Silvia explained. And liquid can be considered a quantum field.
“Now, something very cool happened and it was discovered by the Nobel laureate, that the liquids, the viscosity of the liquids have a certain minimum,” Silvia said. “So, you cannot have any liquid that has lower viscosity than some value. This value depends on the Planck constant. So this means that the liquids, which we can also call ‘fields,’ are quantized, just like the energy. And so now, if we put together the relativity and the quantum mechanics, we have what we call the quantum field theory.
“Now, why do we care? Because now the universe can also be described as a liquid, basically.”
The evolution of the universe, described through quantum mechanics and relativity, is an area where quantum computing shines, Silvia noted. Simulating the viscosity of the universe’s "liquid-like" state post-Big Bang could unlock answers to fundamental questions about cosmic evolution.
Quantum computing also has implications for national security and secure communications because it offers unparalleled security through quantum channels, ensuring that intercepted data becomes meaningless without proper quantum keys, Silvia said. Quantum-enhanced predictive models could improve everything from weather forecasting to financial risk analysis, merging quantum simulations with classical HPC.
Dr. Nicola Fox, associate administrator for NASA's Science Mission Directorate, delivered the SC24 keynote, highlighting the agency's science mission, technological advancements, and inspiring exploration. With a focus on NASA's commitment to explore the unknown and innovate for humanity, Dr. Fox emphasized three core themes: protecting life on Earth, searching for life elsewhere, and uncovering the universe's secrets.
She outlined the pivotal role of supercomputing in NASA's achievements, including processing vast datasets and advancing AI to accelerate scientific discovery. Dr. Fox cited missions like Voyager 1 and 2, which expanded our knowledge of the solar system and continue to explore interstellar space. She celebrated the integration of AI and machine learning, showcased by the PRISM geospatial model and other large-scale AI initiatives aimed at analyzing Earth's atmosphere, monitoring wildfires, and predicting weather patterns.
Highlighting the International Space Station’s (ISS) unique research platform, Dr. Fox described groundbreaking advancements in medicine, clean water technology, and agriculture, all driven by experiments in microgravity. These contributions extend beyond space, improving life on Earth and aiding future missions, including NASA’s Moon-to-Mars objectives under the Artemis program.
Dr. Fox also celebrated the James Webb Space Telescope’s (JWST) unprecedented capability to peer into the universe’s earliest moments. Dr. Fox shared awe-inspiring images, such as JWST’s first deep-field image and simulations of black holes, illustrating how supercomputing enables humanity to “visit” unreachable phenomena.
Dr. Fox explored NASA’s heliophysics and planetary science efforts, including Parker Solar Probe’s close encounters with the Sun and missions like Dragonfly, designed to explore Saturn's moon Titan. She also touched on Europa Clipper, which seeks signs of life on Jupiter’s moon Europa.
Closing with Carl Sagan’s “Pale Blue Dot,” Dr. Fox underscored NASA’s role in inspiring unity, curiosity, and hope. She invited the audience to collaborate with NASA, highlighting how their contributions drive groundbreaking discoveries and technological innovation that benefit all humanity.

In this podcast, PCI-SIG President Al Yanes explores PCI-SIG's journey to PCIe 7.0, advancements in copper and optical specs, and their pivotal role in HPC and AI.

What do you get when you combine football turf, Chick-fil-A, and a bunch of supercomputing operators? Today in Atlanta at the College Football Hall of Fame, that magical combination represented VAST Data's latest update on its quest to become the operating system for AI with its VAST Data Platform. VAST Data knows the SC crowd well, with a heritage of delivering storage and data solutions for the HPC arena. Its advancements with the VAST Insight Engine and other solutions, written about extensively on TechArena, represent a leap forward for VAST, delivering foundational tools to fuel AI training.
Today, VAST CEO Renen Hallak described this transition, highlighting that we’re on the cusp of our next inflection, moving from training to fine tuning and inference. VAST solutions, now integrated with RAG pipelines and vector databases, provide the tools for enterprises to take this bold leap forward with the confidence to keep data secure while delivering the performance offered by the world’s largest models.
This vision has been celebrated by the market, with VAST deployments growing among the largest AI training providers and the company garnering a valuation of over $9 billion on the momentum of its Series E funding round. Today's event was a terrific look into VAST's traction with leading AI players including xAI, CoreWeave, and Harvard Medical School.
First up was xAI, Elon Musk's AI venture, which just delivered Colossus, a 100,000-GPU cluster, in a mere 122 days. Why move so fast? X has notably been less engaged in AI training than the other cloud behemoths, and Colossus places the firm in a position to accelerate its work quickly with the help of NVIDIA, Supermicro, and, yes, VAST Data. The data platform was selected to drive foundational data management for this supercharged work, with VAST engineers working closely with xAI counterparts to get the Memphis-based cluster up and running.
One customer that captured attention for its rapid buildout of AI services at scale was CoreWeave. CoreWeave’s Chief Product Officer, Chetan Kapoor, described his company’s mission to drive an accelerated compute platform custom-optimized for AI. Chetan stated, “in our environment, you’re not going to find six hundred compute options, we’re specifically designed for AI.” The company has delivered on that mission, committing to H100 GPUs a year and a half ago and scaling to hundreds of megawatts of accelerated infrastructure, with rapid growth continuing. With continued market success, CoreWeave could emerge as a peer to the hyperscalers.
CoreWeave has been built from the ground up on the VAST data platform. CoreWeave combines NVIDIA compute with VAST data management and engages select customers that are well prepared for AI adoption at scale. The company is targeting the 10,000-GPU scale for its customer base, a statement that highlights the relative demand for GPU cycles in the market.
Today’s event wasn’t just about hyperscale. VAST also brought up Spencer Pruitt, CIO of Harvard Medical School, to highlight how scientific discovery is being propelled by AI-driven innovation. Spencer shared how Harvard is using high-octane technology to drive genomics research, tackling challenges in disease management such as cancer and advancing personalized medicine.
What’s the TechArena take? Data is the gold of this gilded age, and as long as companies are chasing insight from AI, the VAST data platform will continue to earn bold valuations in the marketplace. While much of this infrastructure buildout represents an AI arms race with repercussions beyond the scope of this article, the world will benefit from the accelerated innovation hyperscalers are driving, as evidenced by Harvard Medical School’s data transformation. Color us continually impressed with VAST Data’s ability to execute in this high-pressure environment, and we’re waiting to see more innovation from this company in 2025.

Peter Dueben of the European Centre for Medium-Range Weather Forecasts explores the role of HPC and AI in advancing weather modeling, tackling climate challenges, and scaling predictions to the kilometer level.

This session will explore the key infrastructure considerations for developing AI applications in industries such as the life sciences, healthcare, and academia, and how edge-specific infrastructure, combined with PEAK:AIO's proven solutions, is critical to meeting the demands of these sectors. Join us to discover how PEAK:AIO’s innovative infrastructure can help organizations harness the power of AI, enabling them to execute transformative ideas and drive significant advancements.

In this illuminating TechArena Fireside Chat, Cornelis Networks’ Lisa Spelman shares deep insights on leadership, team, embracing risk, and why she chose the ‘next great optimization frontier.’

In this podcast, NCSA Director Bill Gropp explores the latest advanced computing trends, from AI innovations to groundbreaking research on supercomputing climate models, and how this tech is transforming science and society.

Discover how AI is transforming data centers, with innovations in high-speed networking, emulation, and hyperscale infrastructure driving efficiency and performance in the era of AI workloads.

There isn't a more inspiring conference in the tech sector than Supercomputing (SC24), the annual gathering of the top technical computing centers on the planet and the industry that supports them with the latest advancements in large-scale tech.
While AI has taken center stage in every technology conversation, it's important to remember that the massively parallel computing delivered by HPC clusters is both the precursor of AI training clusters and the engine behind calculations for some of the boldest scientific projects in history.
Here are just a few examples of what HPC clusters are studying today and are poised to advance tomorrow:
– Complex systems represented by climate and Earth's evolution
– Mapping of weather patterns
– Peering into the outer reaches of space
– The future of disease management
At TechArena, we are plotting a full lineup of interviews with the leading researchers in the field today, and we will publish throughout the week on how these institutes are pushing scientific discovery forward with massive-scale computing as their tool. We'll seek out insights from national labs across the globe as well as university centers engaged in collaborative studies to propel academic research forward. And, of course, we'll peer into the industry innovation that underpins the advancement of supercomputing clusters. What are we looking for? Introducing the top five trends we're tracking at SC24:
Following TechArena: Buckle up and set your eyes on the stars, as TechArena's platform is about to be inundated with inspirational stories of what our sector does best. If you enjoy this coverage, please give us a follow on your favorite social or content platform and subscribe to our newsletter below!

Four months into her tenure as CEO of Cornelis Networks, Lisa Spelman sat down with TechArena to chat about leadership, how tech has changed, her vision for AI innovation, what’s on her playlist – and so much more.
A veteran of the data center and AI industry known for her deep technical expertise, Spelman brings over two decades of leadership experience from Intel to her new role.
We learned that she has perfectionist leanings, favors action, values collective expertise, is a diligent Pelotoner – and a Swiftie.
Dive into our Q&A with Lisa to learn more about her fascinating journey in tech and where she plans to take Cornelis Networks:
1) Q: Lisa, you recently took the helm of Cornelis Networks as CEO. What’s it like to take the reins of a company?
A: It’s an honor, truly. Having a board of directors and a group of founders choose you as the person to help lead the company to the next phase of growth is an honor. I am really enjoying getting to know and work with such a smart and capable team. I also really like the variety: everything comes at you. In a day, you can be in an architecture discussion, setting a sales commission plan, deciding on a new parental leave policy, recording a podcast, and meeting with a potential investor.
2) Q: You've led a lot of senior teams in your time. When you're considering building team cohesion, is there anything you've found to be your secret to success?
A: I’m a big believer in individual accountability, but with a thread of mutual reliance for success. I have often set up teams of mine to need each other in order to succeed. Some people like to pull it all apart and have more of an “every person for themselves” set up. I like to get people to lean in a bit and have to care about each other’s joint success. I’m not sure there’s one perfect recipe, and I will say that cohesion does not mean always getting along or agreeing on everything.
3) Q: For those who are newer to senior leadership, what mistakes did you make in this regard that you'd wish someone had told you about?
A: Oh, I won’t say no one told me, but did I listen? I remember when I took on my first really big team. I was in IT and took a role leading a large, technical, global team. We had people in 50 countries! I so clearly remember the pressure I put on myself that I had to be the one with every answer. I thought, ‘What use am I to them if I get asked a question and I don’t have the best answer?’ Classic perfectionist behavior. I learned pretty quickly, though, that so much progress and scale comes from harnessing all of the incredible knowledge you have in your team. Your power becomes the collective of all of those experiences.
4) Q: You've spent your entire career in tech. What do you see about the 2024 tech era compared to when you entered this arena?
A: Oh my gosh, when I entered the arena 100 years ago? 😊 I will say that the pace of progress and change is astounding. But some of the fundamentals are the same. Tech and scientific discovery are all about what is just out of reach: I can see the idea, but I haven’t quite yet figured out how to get there. It’s what I love about engineering, and I say this all the time – what was impossible on Friday is possible by Monday… and even more elegant by Wednesday (as long as you don’t have too many meetings).
I also see that big ideas and innovation are coming from everywhere. Technology itself has democratized how accessible it is to bring an idea to fruition. Look at my company building an end-to-end network for scale out AI and HPC. We taped out our switch and our adapter ASICs with less than 50 HW engineers, high quality, full emulation. That’s astounding when you think about it.
5) Q: What does this mean in terms of your approach to leadership and engagement in the industry?
A: My leadership philosophy goes back to scale. Everything is moving too fast; you have to create systems of trust and rely on collective expertise to set the pace. By the way, relying on collective expertise is not the same as requiring consensus, that is slooooow and leads to excessive politics in an organization. As far as the industry, I see it as my role to spend about 50% of my time outside of the company. Whether that’s with customers, investors, recruiting, suppliers, potential partners, peers. It’s my job to stay on the pulse of the industry and use that knowledge to expand the worldview of the team.
6) Q: You are also a bit of a unicorn as a female CEO of a tech infrastructure company. What do you think is imperative for female leaders in this industry, and how do you think the tech sector has evolved in terms of equal opportunity for women?
A: That’s the goal, right? A unicorn leading a unicorn. 😊 I think it’s imperative for all leaders to do deep work understanding their strengths and areas where they need support. Then use that knowledge as you build out the best possible team to complement your skillsets, fill in any gaps, and challenge you. I tell all my teams that if they are sitting around the table agreeing with me, then I don’t really need them. I enjoy hearty debate. But I have to create space for that and make it part of the culture.
I think for women specifically, the likability tax is real. Study after study shows that both men and women hold women to different standards on how likable they should be as part of effective leadership. And it is a tax, it’s draining. Every leader has to think about how they show up, but I think that women and minorities carry an extra burden there. It takes energy to be extremely self-controlled or to wear a mask.
7) Q: I need to work in AI here, but we aren't going to deep dive on tech. I really want to ask you about your view on the promise of AI and if you think our industry will deliver to the vision being touted?
A: I think we are just at the start of what’s possible. I know that AI can raise a lot of fears in people, and I know that it is being shoved in some places that just don’t make sense. Have you heard the one about using AI in drive throughs to recommend the right beverage? It’s hot out so recommend a diet coke instead of coffee. That’s not AI, it’s common sense! But despite that, despite the AI washing of everything, I really think we are heading into some amazing breakthroughs that will improve life for humans.
8) Q: What practices or habits do you rely on to keep your mind sharp and your energy up amid the demands of being a CEO?
A: Exercise, I love my Peloton. Walking and being outside.
9) Q: What advice would you give your 23-year-old self?
A: OK, this is really cliché, but don’t worry so much about what other people think. Some people are going to think you’re great, others are going to find you less so. It’s fine. Focus on what interests and excites your brain, not what someone else thinks you should do. Also, keep taking risks. My move overseas, my move to IT, my move to a startup, these have all been risks, and each one has turned out better than I could have imagined.
10) Q: What team(s) do you root for?
A: We went to the Blazer game on Sunday night. It was painful, not a good year. The Grizzlies have a fun player from Japan who’s only 5’8”. He’s an amazing ball handler and passer, the crowd was going crazy cheering for him and he was on the other team!
11) Q: What is the book that you most recommend?
A: I read a lot for work, but I like to read for fun too. I think it’s good to have a lot of choices so you always have something that’s right for your mood. Have a book, a few magazines, newspapers. I have really enjoyed reading a bunch of Fredrik Backman’s books.
12) Q: What’s at the top of your playlist?
A: I like everything, truly. But I must say I am a Swiftie!
13) Q: What superpower do you have that people don’t know about you?
A: I think I have a good ability to see everyone’s POV and the motivation behind it. It’s not so much acknowledging someone else’s point, which is just good listening and also valuable. But I think I do a pretty good job of getting to the why, and that helps me in working with and leading people.
14) Q: Final question: At the end of your career, how do you want to leave your mark on those you lead and collaborate with?
A: We had a lot of fun while we did hard, cool stuff.