Explore the cutting edge of computing, from the data center to the edge, including solutions unlocking the AI pipeline, all backed by Solidigm's leading SSD portfolio.

In today’s rapidly evolving IT landscape, flexibility and adaptability are key to meeting the demands of businesses seeking innovative, scalable solutions. This shift in customer needs came into sharp focus at CloudFest 2025, where we sat down with Steve Gutierrez, director of sales at Solidigm, and Arun Garg, founder and CEO of Taurus Group. With a focus on open platforms and best-of-breed components, Taurus is providing businesses with the freedom to build the infrastructures they need, without being tied to a single-vendor, one-size-fits-all solution.
Taurus Group’s approach is centered around offering a diverse array of options to its partners, tapping into open compute platforms and leveraging the expertise of its sister company, Circle B, a specialized provider for the Open Compute Project Foundation (OCP). Alongside this, Taurus works closely with Cluster Vision to deliver high-performance computing solutions and open-source cluster management tools. These collaborations give Taurus the tools to provide cutting-edge, flexible infrastructure options while remaining agile enough to adapt to the unique demands of their clients.
Arun highlighted how important it is for businesses today to have the freedom to choose the best components for their needs, rather than relying on a single vendor’s ecosystem. The days of restrictive, monolithic IT systems with one-size-fits-all solutions are long gone — today’s customers are looking for more flexibility and the ability to customize their infrastructure.
Solidigm’s products, such as the recently launched 122TB storage solution, perfectly complement Taurus Group’s portfolio, offering innovative, reliable options for clients in need of scalable storage. According to Arun, customers appreciate the collaboration with Solidigm because it ensures that they can deliver the best possible solutions with the added benefit of excellent support from Solidigm’s technical and support teams.
What’s the TechArena take? The collaboration between Taurus Group and Solidigm is a great example of how true partnerships can help businesses deliver more powerful solutions to their customers. It’s clear that as companies like Taurus continue to push for openness and flexibility in the IT space, they’re creating a future where businesses are no longer constrained by traditional, single-vendor systems, but instead have access to a wide range of tools and solutions that are tailored to their needs.
For anyone looking to explore more about Taurus Group and how they can leverage the latest storage solutions, reach out directly through their website, tauruseu.com. With a clear focus on building strong, flexible partnerships, Taurus Group is poised to continue playing a key role in the future of IT infrastructure.

In a landscape often dominated by hyperscalers and familiar names, Scaleway is quietly rewriting the rules. With a clear vision and a sharp focus on what modern cloud scalers actually need, it’s stepping into a role that feels both timely and transformative. In a recent conversation between Conor Doherty, field applications manager at Solidigm, and Yann-Guirec Manac’h, head of hardware R&D at Scaleway, we get a closer look at how this European cloud provider is not only keeping pace with hyperscale trends, but is also helping to shape them through focused innovation in AI infrastructure and sustainability.
Scaleway’s approach feels refreshingly grounded. The company focuses on delivering a complete foundation — compute, network, storage — and binding it all together with a unified control plane that supports both traditional and AI workloads. But what really sets Scaleway apart is its dual-track strategy for AI: accessible GPU instances for smaller-scale use, and massive, tightly interconnected GPU clusters for the heavy-duty training jobs. It’s the kind of infrastructure that recognizes that AI work isn’t a one-size-fits-all operation — some tasks need two GPUs, while others demand thousands. Scaleway is designing for both.
Gaining a deeper understanding of Scaleway’s AI strategy starts with examining the data pipeline. As Yann-Guirec put it, AI training isn’t just compute-heavy — it’s about data complexity, scale and flow. Harvesting and curating vast datasets, handling throughput during training, managing checkpoints and doing inference all require different storage strategies. Cold storage for archival compliance, warm layers for preparation and hot storage for training — each has different hardware implications. It’s not just about speed; it’s about adaptability, and Scaleway’s infrastructure acknowledges that every phase in the AI pipeline has unique demands.
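To make that tiering idea concrete, here is a minimal sketch of how pipeline stages might map to storage tiers. The stage names, tier labels, and mapping below are illustrative assumptions, not Scaleway’s actual data-placement logic.

```python
# Illustrative only: a toy mapping of AI pipeline stages to storage tiers.
# Stage names and tier assignments are assumptions for the sake of example.

STAGE_TO_TIER = {
    "archive": "cold",    # long-term retention and compliance copies
    "curation": "warm",   # dataset harvesting, cleaning, and preparation
    "training": "hot",    # active training reads and checkpoint writes
    "inference": "hot",   # low-latency model and feature access
}

def storage_tier(stage: str) -> str:
    """Return the storage tier a pipeline stage lands on in this toy model."""
    if stage not in STAGE_TO_TIER:
        raise ValueError(f"Unknown pipeline stage: {stage!r}")
    return STAGE_TO_TIER[stage]

if __name__ == "__main__":
    for stage in ("archive", "curation", "training", "inference"):
        print(f"{stage:>9} -> {storage_tier(stage)} storage")
```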
With the conversation around sustainability finally taking center stage in tech, Scaleway’s stance is more than a footnote — it’s core to their identity. Backed by the Iliad Group, the company has built and operates data centers that run on 100% renewable energy. DC5, the data center that houses a lot of their AI pods, forgoes traditional air conditioning in favor of adiabatic and free cooling methods. The result is dramatically lower power usage effectiveness without sacrificing performance. But Yann-Guirec takes it a step further, pointing out a rarely discussed metric: water usage. Scaleway is looking at water usage effectiveness as well, with a view toward responsible innovation that doesn’t overlook environmental cost.
What’s perhaps most fascinating is how Scaleway sees the future of AI training workloads. Today’s language models may be grounded in web-scale text, but tomorrow’s models — multimodal, agentic and domain-specific — will need exponentially more data across formats, such as images, audio and video. That means even more demand on both GPU throughput and the bandwidth feeding those GPUs. Scaleway is building toward this future now, with GPU pod systems capable of pushing hundreds of gigabits per second and storage systems built to scale with that need.
While big names in AI infrastructure often dominate the narrative, conversations like this remind us that serious innovation is happening beyond the usual suspects.
So, what’s the TechArena take? Scaleway isn’t trying to be everything to everyone – but for teams building sophisticated AI pipelines, in Europe and beyond, it’s quickly becoming a name to watch.
To dive deeper, visit scaleway.com.

At the NVIDIA GTC conference, where the latest advancements in AI and high-performance computing (HPC) are on display, Scott Shadley, director of leadership narrative and evangelist at Solidigm, sat down with Carrie Wang, sales and marketing at Giga Computing, and TechArena to discuss the rapidly evolving landscape of enterprise technology and AI infrastructure. Our conversation shed light on the critical role data centers and efficient computing play in addressing the growing demand for AI-driven workloads.
Giga Computing is making significant strides in the AI and HPC arenas. Carrie spoke about the company’s journey, noting how GIGABYTE’s server business began with a small team back in 2000, ultimately evolving into a leader in the AI infrastructure sector. Today, Giga Computing, a wholly owned subsidiary of GIGABYTE, is shaping the future of high-performance AI, cloud and HPC solutions for businesses worldwide.
As AI adoption surges across industries, the need for power-efficient and scalable infrastructure has never been more critical. Carrie highlighted that AI inferencing, the process of applying trained models to real-world data, is becoming more power efficient, but the growing demands of AI workloads mean that data-intensive processes are increasingly requiring better storage, networking and compute solutions.
One of the standout solutions Giga Computing is showcasing this year is the GIGABYTE G893 GPU server platform. With support for NVIDIA HGX™ B200 and NVIDIA HGX™ B300 NVL16, this platform is specifically designed to handle the most demanding AI and HPC workloads. Paired with NVIDIA BlueField®-3 SuperNICs, the G893 delivers outstanding performance while minimizing energy consumption – a key concern as data centers grapple with rising power demands. Additionally, Giga Computing’s innovative cooling solutions, including the GIGABYTE G4L3 platform, ensure that these powerful systems run efficiently even under heavy loads. At the conference, Giga Computing presented a comprehensive solution for data centers, featuring a direct liquid cooling (DLC) rack system that combines multiple racks for a fully integrated solution.
The conversation also touched on how Giga Computing is addressing the challenges faced by the industry. With AI workloads continuing to surge, the demand for power-hungry data centers is driving up rental rates, with increases of up to 6.5% compared to the first half of 2023. In this environment, businesses must carefully select the right components to maximize performance while keeping power consumption in check. It’s a delicate balancing act, and Giga Computing’s solutions are designed to help customers optimize their infrastructure to address these challenges.
Another topic discussed was the rise of agentic AI — systems capable of making decisions autonomously based on real-time data. Carrie emphasized that agentic AI relies heavily on inferencing, and Solidigm’s NVMe solid state drives (SSDs) play a critical role in supporting these models. With Solidigm’s cutting-edge storage solutions, customers can efficiently handle large-scale datasets in their data centers, minimizing delays and ensuring low costs and power consumption.
So, what’s the TechArena take? As AI, HPC, and cloud workloads continue to evolve, it’s clear that collaboration between companies like Solidigm and Giga Computing is key to driving the next wave of innovation. By working together to deliver integrated solutions that prioritize performance, power efficiency, and scalability, these companies are setting the stage for the future of AI infrastructure.
Watch the full video here.
For those looking to learn more about Giga Computing and their groundbreaking solutions, you can visit their website, gigabyte.com/Enterprise, or follow them on LinkedIn and Twitter to keep up to date on their latest offerings and innovations.

Solidigm's Roger Corell chats with ICE's Anand Pradhan to explore how AI, storage, and system design fuel 700B+ daily trades — and what AI inference means for the future of storage at scale.

At the forefront of IT infrastructure advancements, ASBIS has been making significant strides in providing high-performance solutions, particularly in the transition from traditional hard drives to solid-state drives (SSDs). This shift, driven by the growing demand for speed, reliability and energy efficiency, has seen ASBIS collaborate closely with Solidigm, a leader in the SSD industry, to deliver cutting-edge storage solutions. Eduards Lazdins, business development manager at ASBIS, shared insights into the company’s approach during a recent discussion with Steve Gutierrez, director of sales at Solidigm, and TechArena.
ASBIS, a value-added distributor and solution provider, has an expansive footprint that spans 28 countries, with over 20,000 active customers across 60 nations. Its reach is impressive, but it’s the company’s ability to address the unique needs of diverse markets that really stands out.
ASBIS serves a mix of established and emerging markets facing an array of opportunities and challenges for IT infrastructure development. While Western Europe leads in cloud adoption and AI-driven workloads, regions like Central and Eastern Europe, the Caucasus and parts of the Middle East and Africa are catching up rapidly. The challenge is striking the right balance between affordability and performance. This is where ASBIS excels — by offering high-performance solutions at scalable price points, making cutting-edge technology accessible to a wide range of customers.
ASBIS’ approach to staying ahead in the competitive IT landscape involves constantly expanding its product portfolio and geographical reach. The company is not just a traditional distributor — it operates its own server assembly line in the European Union, with the capacity to produce thousands of servers per month. This vertical integration, combined with strong partnerships with leading brands, such as Solidigm, enables ASBIS to deliver tailored solutions that meet the specific needs of its diverse customer base.
A key focus of ASBIS’ strategy has been the transition to SSDs, and it has leveraged its collaboration with Solidigm to accelerate this shift. SSDs provide significant advantages over traditional hard drives, such as faster speeds, improved reliability and lower power consumption. As businesses increasingly turn to SSDs for their storage needs, ASBIS is helping customers adopt this next-generation technology with specialized services, including pre-sales consultancy, robotic solutions, technical support and custom solutions.
One example of ASBIS’ impact is its work with cloud service providers and data centers. By integrating Solidigm’s SSDs into their infrastructure, ASBIS has helped optimize storage solutions, significantly reducing latency and improving workload efficiency. For industries handling vast amounts of data, these improvements in speed and reliability are essential. ASBIS’ solutions have not only enhanced performance, but also minimized power consumption, a critical factor for organizations aiming to reduce operational costs and carbon footprints.
What’s the TechArena take? In a world where technology is rapidly advancing, ASBIS’ commitment to providing innovative, high-performance solutions is a prime example of how the right partnerships and a focus on customer needs can drive real-world results. Looking ahead, ASBIS is well-positioned to continue leading the charge in IT infrastructure solutions, offering both the expertise and technology to meet the evolving needs of its customers.
To learn more about their product offerings and solutions, visit ASBIS.com, or follow them on LinkedIn and X for the latest updates.

At CloudFest 2025, Supermicro showcased their innovations that are driving the future of AI, cloud infrastructure and storage solutions. As AI technology continues to evolve, Supermicro’s ability to deliver cutting-edge hardware solutions has become a game-changer, and their booth at CloudFest served as a testament to that progress. Solidigm’s Hayley Corell spoke with Thomas Jorgensen, senior director in the Technology Enabling Group at Supermicro, to dive deeper into how Supermicro is powering AI advancements and meeting the growing demands of modern infrastructure.
Thomas highlighted the rapid growth in AI, noting that the demand for powerful AI infrastructure is being driven by large-scale model training and, increasingly, AI inferencing. But what's crucial for this advancement? A central element of that answer is storage. Supermicro understands that AI models require fast, reliable storage to keep GPUs from idling, ensuring that the entire infrastructure is working in concert to deliver results as quickly as possible. As Thomas bluntly put it, “AI doesn't work without storage,” and Supermicro is delivering solutions to meet this growing demand.
Over the past few years, AI’s exponential growth has shifted the way companies approach infrastructure. As Supermicro and the rest of the industry ride the wave from training-centric infrastructure demand to a demand curve that also reflects inference, new kinds of infrastructure for a wider range of environments are required. As AI is increasingly integrated into edge environments, Supermicro is positioning itself at the forefront, enabling AI at the edge with small, fanless servers that process inferencing directly where data is generated. This localized approach reduces latency and improves the speed at which data is processed, ensuring AI workloads perform seamlessly.
A key part of Supermicro’s success lies in its commitment to delivering high-performance, low-latency infrastructure. Thomas discussed how AI clusters require not only powerful GPUs, but also fast network communication and efficient storage systems. The infrastructure design has evolved significantly to meet the demands of AI, particularly with the rise of high-density petascale storage solutions. Supermicro’s focus on providing multi-tiered storage setups ensures that data is delivered at optimal speeds for any given AI workload, enabling seamless performance across AI applications.
The collaboration between Solidigm and Supermicro has been crucial in driving these advancements, particularly in the realm of high-speed storage. Solidigm’s cutting-edge storage solutions, such as their high-capacity SSDs, perfectly complement Supermicro’s AI infrastructure. By combining Solidigm’s innovative storage technology with Supermicro’s powerful hardware, they deliver the performance and reliability required to handle the intense data demands of AI workloads.
This collaboration helps ensure that AI models can access and process data quickly, making it an essential part of AI-driven infrastructure.
Supermicro's petascale storage systems can integrate SSDs of up to 122TB each. This massive capacity allows AI workloads to scale up and manage vast amounts of data with ease.
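For a rough sense of what that capacity means, here is a back-of-the-envelope calculation. The 32-drive chassis is an assumption chosen purely for illustration; actual drive counts vary by Supermicro model and form factor.

```python
# Back-of-the-envelope raw capacity for a hypothetical 32-drive chassis.
# The drive count is an illustrative assumption, not a specific Supermicro spec.

drive_tb = 122            # per-SSD capacity, from the 122TB drive class
drives_per_chassis = 32   # assumed drive count; varies by chassis and form factor

raw_tb = drive_tb * drives_per_chassis
print(f"Raw capacity per chassis: {raw_tb} TB (~{raw_tb / 1000:.1f} PB)")
# -> Raw capacity per chassis: 3904 TB (~3.9 PB)
```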
So, what’s the TechArena take? For on-prem AI deployments, tapping large volumes of data locally for AI integration across business functions is becoming increasingly critical, especially as many businesses shift away from the cloud due to rising costs and data privacy concerns. Supermicro’s petascale storage delivers the speed and bandwidth needed to support the growing demands of AI models, ensuring that organizations can keep up with both the scale and complexity of modern AI workloads right from their own data centers. Solidigm’s leading 122TB drives are a perfect match for these large-scale deployments.
For those looking to learn more, Supermicro offers an abundance of resources on their website (supermicro.com) and social media channels (X, LinkedIn and YouTube).

From eight-way GPU racks to liquid cooling breakthroughs, Giga Computing and Solidigm explore what it takes to support AI, HPC, and cloud workloads in a power-constrained world.

Scaleway’s Yann-Guirec Manac'h shares how the company is simplifying complex AI pipelines, maximizing SSD performance, and driving sustainable innovation in European cloud infrastructure.


At GTC 2025, Arm’s Chloe Ma explains how AI is shifting from compute to full-system optimization — and why storage, inference, and the edge are becoming central to tomorrow’s intelligent infrastructure.

With NVIDIA CEO Jensen Huang’s headline-grabbing reference to GTC as the “Super Bowl of AI,” expectations for this year’s conference were sky-high — and key players delivered. Among the standout innovations was Alluxio’s contribution to transforming how data is managed and accelerated in AI workloads. Scott Shadley, Director of Leadership Narrative and Evangelist at Solidigm, joined Bin Fan, Founding Engineer and VP of Technology at Alluxio, to discuss how their team has been pushing the envelope in AI data acceleration and efficient storage management, and quickly establishing a tangible impact on how AI models are trained and deployed across industries.
At the heart of Alluxio’s innovation is its ability to decouple storage and compute. Traditionally, data storage has been tightly coupled with compute resources, limiting the scalability and speed of AI workloads. But with Alluxio’s technology, data scientists and AI modelers no longer need to worry about the complexities of storage management. Instead, Alluxio introduces an abstraction layer between applications and storage, making data access seamless and efficient.
One of the most compelling aspects of Alluxio is its ability to accelerate data access. By positioning Alluxio close to GPU applications, the technology significantly reduces the time it takes to access large datasets, especially in geographically dispersed environments. This is particularly important for AI workloads that require massive amounts of data across different regions or clouds. With Alluxio’s caching layer, repeated data access is minimized, ensuring that applications are running at peak performance without the usual latency or overhead.
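The general pattern described here, keeping a local fast copy of frequently read data near the GPUs, can be sketched in a few lines. This is not Alluxio’s API; it is a minimal cache-aside illustration in which fetch_from_remote() and the NVMe-backed cache directory are hypothetical placeholders.

```python
# Generic cache-aside sketch: serve repeated reads from a local NVMe-backed cache
# instead of re-fetching from remote object storage every time.
# NOT Alluxio's API -- fetch_from_remote() and CACHE_DIR are hypothetical.
import hashlib
from pathlib import Path

CACHE_DIR = Path("/mnt/nvme_cache")  # assumed local SSD mount point

def fetch_from_remote(uri: str) -> bytes:
    """Placeholder for a slow read from object storage in another region or cloud."""
    raise NotImplementedError("wire this up to your object store client")

def read_cached(uri: str) -> bytes:
    key = hashlib.sha256(uri.encode()).hexdigest()
    local = CACHE_DIR / key
    if local.exists():                 # cache hit: fast local NVMe read
        return local.read_bytes()
    data = fetch_from_remote(uri)      # cache miss: pay the remote latency once
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    local.write_bytes(data)            # populate the cache for subsequent readers
    return data
```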
But Alluxio isn’t just about speeding things up – it also brings simplicity and flexibility to the table. By abstracting storage into a unified structure, their solution enables organizations to seamlessly manage their data across multiple on-prem and cloud deployments without the hassle of manual configuration or inconsistent access. Whether it's scaling up GPUs in one region or shifting workloads to another, Alluxio’s virtualization and abstraction layers provide a seamless experience for both data engineers and end users.
To meet these varied and demanding workloads, Alluxio has partnered with Solidigm to provide reliable, high-capacity storage solutions. While Alluxio serves as the software layer for managing data storage, Solidigm brings its experience as a leading supplier of SSDs to offer the ideal hardware for Alluxio’s caching layer. Together, this collaboration ensures that AI workloads are running on the fastest, most reliable infrastructure possible. The ability to store and retrieve data efficiently is essential in today’s fast-paced AI landscape, and Alluxio’s integration with Solidigm hardware delivers that performance without compromise. (Learn more about data storage optimized for the AI era.)
Alluxio is doing more than just keeping up with the growing demand for AI infrastructure — it’s leading the way in making data management simpler, faster, and more efficient. As AI continues to evolve, technologies like Alluxio will be at the forefront, empowering organizations to harness the full potential of their data.
For anyone curious about diving deeper into Alluxio’s capabilities, the company’s website, alluxio.io, and social media channels, including YouTube, offer a wealth of resources. Watch the full video here.

PEAK:AIO’s Roger Cummings joins Solidigm’s Scott Shadley at NVIDIA GTC to talk AI infrastructure shifts, single-node innovation, and making data placement as intelligent as the AI it powers.

At NVIDIA GTC 2025, Cloudflare shared an exciting vision for the future of AI, automation, and developer tools. During a conversation with Scott Shadley, Director of Leadership Narrative and Evangelist at Solidigm, Aly Cabral, Cloudflare VP of Developer GTM, explained how they are becoming a critical player in the rapidly evolving tech landscape. As industries shift and change, Cloudflare’s focus is clear: empowering developers to navigate these transformations with the right tools and ample support.
One of the main topics of discussion was agentic AI, and how to define the loosely used term that’s been all the rage for next-gen AI predictions. Simply put, agentic AI goes beyond traditional automation by enabling systems to make decisions and manage more complex, dynamic tasks autonomously. While automation improves efficiency, agentic AI adds intelligent oversight, making it easier to both monitor and manage automated systems. Aly emphasized that automation alone isn’t sufficient — it’s about creating systems that are not only efficient, but also transparent and easy to troubleshoot. Cloudflare’s Workflows product addresses this by giving developers visibility into complex systems, helping them quickly identify and resolve issues in multi-step processes. This capability is becoming even more essential as automation plays a larger role in development.
In addition to automation, CodeGen tools are emerging as valuable resources for developers. These AI-powered tools simplify the coding process, allowing developers to generate code faster and with less effort. However, as Aly pointed out, the real challenge lies not in creating applications, but in managing and maintaining them over time. Cloudflare’s platform is built to support developers throughout the entire lifecycle of an application — from creation to long-term management — ensuring that systems remain scalable, secure, and efficient as they evolve.
Looking forward, Cloudflare is doubling down on AI and developer tools. As Aly mentioned, the company is preparing for Developer Week in April, where they’ll unveil new launches and innovations aimed at improving the developer experience. With new features and tools focused on simplifying the development process and harnessing the power of AI, Cloudflare is working to ensure developers have everything they need to create smarter, more scalable applications.
So, what’s the TechArena take? Cloudflare’s approach to partnerships sets them apart. Rather than locking customers into proprietary ecosystems, Cloudflare prides itself on being a connector, offering a globally distributed network and an open ecosystem that integrates well with a wide variety of third-party services – without an aggressive egress tax. This flexibility allows developers to use the best tools for their needs without being restricted to a specific platform. It’s this open approach that makes Cloudflare an ideal partner for companies like Solidigm, who offer unique solutions that complement Cloudflare’s services.
Watch the full video here. Learn more about Solidigm’s data storage solutions for the AI era here.
For those interested in staying connected with Cloudflare’s latest developments, the company maintains an active Discord community, YouTube channel, and X presence, providing ample opportunities for engagement and learning. Or visit their website.

I recently had the opportunity to sit down with Jeniece Wnorowski of Solidigm, and George Crump, Chief Marketing Officer at Verge.io, to discuss how Verge.io is taking a fresh approach to IT solutions.
The Verge.io solution is game-changing for enterprises looking to optimize their IT infrastructure. Instead of relying on the traditional method of integrating different components through a GUI (Graphical User Interface), Verge.io has gone a step further by combining all aspects of IT infrastructure—networking, storage, and virtualization—into a single, unified code base. This results in a seamless user experience coupled with a dramatic improvement in performance - all while lowering hardware requirements.
By integrating these components, Verge.io improves efficiency and increases hardware flexibility. George shared that Verge.io supports hardware up to seven years old, making it highly adaptable for organizations with legacy systems. This innovation stems from Verge.io’s founding story — Greg Campbell, Verge.io’s CTO, was initially frustrated by the amount of time he spent managing infrastructure while developing a search engine to compete with Google and Amazon – so he decided to build his own solution. Today, this approach has resulted in a product that runs on less than 300,000 lines of code, ensuring fewer bugs and greater reliability compared to traditional solutions that often operate on millions of lines of code.
The solution also addresses one of the most pressing concerns for IT professionals today—cost predictability. Verge.io’s simple licensing model charges per server rather than by core or capacity. This straightforward pricing model appeals to a wide range of IT professionals. By supporting multi-tenancy, Verge.io makes it easier to manage resources across various clients, delivering efficiency and flexibility in a shared environment.
As AI continues to shape the future of IT infrastructure, Verge.io’s platform is designed to be AI-ready. George highlighted the growing importance of AI in enterprise workloads, particularly as organizations explore the use of private AI models. Verge.io is addressing the challenge by ensuring its platform can easily integrate GPUs for AI workloads.
One of the key factors that sets Verge.io apart from others in the market is its approach to migration. As George pointed out, migration is essential when transitioning to a new infrastructure. He also shared that Verge.io’s approach ensures minimal downtime, enabling businesses to shift their data and settings with greater efficiency. This streamlined migration process is key to Verge.io’s ability to deliver a seamless experience for their customers, no matter the scale.
In addition to seamless migration, Verge.io focuses heavily on reliability through its integrated platform. George pointed out that the system continuously monitors the environment, ensuring operations run smoothly even during network disruptions. For example, he described how Verge.io's platform maintained data integrity during network failures when traditional infrastructure would have struggled to do the same.
For enterprises considering storage solutions, Verge.io is leveraging Solidigm’s storage technology to optimize performance and lifespan. George shared how the platform supports various classes of SSDs, including QLC and TLC drives, and integrates them seamlessly into the infrastructure to meet workload demands. This approach ensures that organizations can optimize performance while managing their storage needs efficiently.
So, what’s the TechArena take? Verge.io’s unified infrastructure solution is reshaping the way organizations manage their IT environments. With a focus on cost predictability, AI-readiness, and seamless migration, Verge.io presents a compelling option for businesses that are looking to simplify their infrastructure and improve efficiency.
Listen to our full discussion here.

In this Data Insights episode, Andrew De La Torre discusses how Oracle is leveraging AIOps to enable automation and optimize operations, transforming the future of telecom.

During GTC, Solidigm’s Scott Shadley and Dell’s Rob Hunsaker, director of engineering technologists, discussed how Dell is tackling the challenges of AI data infrastructure with cutting-edge solutions.

Tune in to our latest episode of In the Arena to discover how Verge.io’s unified infrastructure platform simplifies IT management, boosts efficiency, & prepares data centers for the AI-driven future.

Join us on Data Insights as Mark Klarzynski from PEAK:AIO explores how high-performance AI storage is driving innovation in conservation, health care, and edge computing for a sustainable future.

When you think of Gigabyte, gaming hardware probably comes to mind. But this Taiwan-based computer hardware manufacturer develops much more than motherboards and graphics cards – they provide a spectrum of computer hardware as well as liquid and immersion cooling and have a long history of contributing to open standards and advancing server technologies.
I recently had the pleasure of chatting with Chen Lee, VP of Sales, HPC, Data Center and Enterprise for Giga Computing, and learned a fascinating tidbit about how the company became involved in the Open Compute Project Foundation (OCP).
“Around 2004, this very little-known company came to us and said, ‘We’ve got a search engine, and we want to build this motherboard and this thing called OpenRack,’” Chen explained.
The little-known company was Google, he said.
“So that's how we got into OCP,” Chen said. “(Gigabyte was) actually the first company to help Google develop open compute.”
Gigabyte’s collaboration with Google on OpenRack marked the company’s entry into the open infrastructure movement, making them one of the initial contributors to OCP standards.
Today, Gigabyte’s portfolio extends beyond Intel and AMD servers — they also produce Arm-based solutions using Ampere technology and specialize in advanced cooling systems like immersion and direct-to-chip liquid cooling. With this holistic approach, they continue to drive efficiency and performance in the data center space, reflecting their adaptability and forward-thinking approach.
Embracing AI: Gigabyte's Focus on GPU Servers
Artificial Intelligence (AI) has reshaped the demands on data centers, particularly in terms of computing power and infrastructure. Chen discussed how Gigabyte has been positioning itself in the AI hardware game, particularly through high-density GPU servers. He shared a pivotal moment for Gigabyte in 2010 when they introduced a 2U server that could support eight double-wide, dual-link GPUs, which at the time was the highest density on the market.
Today, Gigabyte’s expertise in GPU servers continues to be an asset, providing systems for AI model training and inferencing, using cutting-edge GPUs like Nvidia’s H100 and soon, Blackwell. As AI shifts towards edge deployments, Gigabyte is also preparing for the growing importance of edge inferencing, which Chen predicts will be a significant area of growth in the near future. Industries such as medical, finance, and retail are moving fast to adopt AI solutions at the edge, from convenience store smart shelving to real-time customer analytics. Gigabyte is ready to meet these needs with high-performance, scalable server technology that suits the unique challenges of edge computing.
Liquid Cooling and Efficiency in Data Centers
The demand for powerful servers to support AI training and inferencing has pushed energy consumption to unprecedented levels, making cooling a top priority. Chen highlighted how immersion and direct liquid cooling are allowing Gigabyte to manage energy efficiency better while meeting the needs of customers working on advanced AI projects. It’s a testament to the company’s adaptability and focus on sustainable solutions—aligning well with the OCP’s values of open innovation and energy efficiency.
AI Beyond the Data Center: The Future of Inference
Chen and I also discussed moving from centralized data center training to inferencing at the edge. Today, most inferencing still happens within large data centers, using high-power systems designed for training. But Chen believes that as AI technologies mature, edge inferencing will become critical—allowing smaller, more efficient hardware to perform tasks where the data is generated, such as in retail stores, hospitals, and banks.
Chen shared an interesting example involving a convenience store, where AI systems can detect customer behavior in real-time and use edge servers tucked away in the back to provide analytics directly to the headquarters. The potential for rapid, on-site AI-driven insights will push industries to adopt smaller-scale AI inferencing solutions—a market that Gigabyte is well-positioned to serve.
This shift to the edge will transform how AI is implemented across industries, bringing smarter technology closer to users and changing how data centers interact with local environments. Chen also shared that, in his view, AI isn’t just a passing trend—it’s a new wave that’s here to stay.
So what’s the TechArena take? As AI evolves and the infrastructure to support it becomes more advanced, hats off to Gigabyte for doubling down on its strengths—high-performance GPU servers, innovative cooling technologies, and partnerships with leading hardware and storage vendors.
Thanks to Solidigm for sponsoring this delightful Data Insights discussion. In case you missed it, check out the full episode here. As AI and edge computing continue to advance, the innovations coming from companies like Gigabyte are paving the way for the data centers of tomorrow.

While AMD has been consistent in recognizing the new demands of AI-enabled applications, the company remains steadfast in ensuring that AMD EPYC™ processors continue to offer leading performance for traditional compute workloads, such as HPC, database, cloud native applications, collaboration systems, finance, and more.
I recently caught up with Ravi Kuppuswamy, AMD Senior Vice President of Server Product & Engineering, to explore the company’s approach to the evolving landscape of enterprise workloads, hyperscale innovation, and the growing influence of AI.
Traditional compute applications are also adapting and adding elements of AI into their application environment, he said.
“In a wide array of apps from Microsoft, Oracle, SAP, we see them adding AI-enhanced tools such as recommendation engines, chatbots, into their application,” he said. “While massive AI models are indeed a significant step…the vast majority of real world applications still are more evolutionary and focused on general compute.”
This dual focus allows AMD to serve diverse customer needs, ensuring that cutting-edge AI capabilities don’t overshadow the ongoing importance of reliable, efficient traditional computing.
A Portfolio Built for Versatility
AMD's diverse portfolio spans CPUs, GPUs, AI NICs, and more, offering flexibility for a wide range of customer requirements. Kuppuswamy described this strategy as “letting customer needs guide the discussion,” highlighting how AMD supports everything from cost-effective solutions to high-performance configurations.
For workloads requiring heavy training or models exceeding 13 billion parameters, AMD’s CPU-GPU combinations, such as the recently launched MI300 series, provide the scalability and efficiency necessary for advanced AI applications. This approach ensures that customers can select solutions tailored to their specific operational goals and budgets.
Hyperscale Design and Energy Efficiency
During the OCP Summit, hyperscale configurations took center stage. Kuppuswamy explained how AMD collaborates with customers to design systems optimized for evolving data center demands. The focus on energy-efficient design is critical, as global technology-related energy consumption rises in tandem with increasing data generation.
AMD’s commitment to open standards plays a significant role in these efforts. By embracing interoperability, AMD fosters innovation that benefits hyperscalers as well as enterprises looking to leverage cutting-edge technology without proprietary limitations.
The Enterprise and Cloud Continuum
Enterprises are increasingly adopting hybrid models that combine on-premises and cloud computing. Kuppuswamy highlighted how AMD technologies enable customers to build robust on-premises infrastructures while seamlessly scaling to the cloud when demand spikes.
This flexibility is especially valuable for enterprises that lack the resources of hyperscalers.
Impact and Future Vision
AMD’s leadership in the data center market has grown significantly, with a remarkable rise in market share from less than 1% to 34% in recent years. This growth underscores the appeal of AMD’s energy-efficient solutions and customer-first approach.
Looking ahead to 2025, Kuppuswamy anticipates a wave of IT infrastructure upgrades driven by outdated systems nearing the end of their lifecycles. He highlighted the dramatic efficiency improvements offered by the latest generation of AMD EPYC processors: replacing 1,000 four-year-old CPUs with just 131 new-generation processors delivers the same workload performance, with significantly reduced power and space requirements.
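The consolidation arithmetic behind that claim is simple to check. In the quick sketch below, only the 1,000-to-131 figures come from the conversation; the per-server wattage is an assumption added purely for illustration.

```python
# Consolidation math using the figures cited above; wattage is illustrative only.
old_servers = 1000   # four-year-old CPUs
new_servers = 131    # current-generation AMD EPYC processors for the same work

print(f"Consolidation ratio: ~{old_servers / new_servers:.1f} : 1")   # ~7.6 : 1

assumed_watts_per_server = 500       # assumption for illustration, not an AMD figure
old_kw = old_servers * assumed_watts_per_server / 1000
new_kw = new_servers * assumed_watts_per_server / 1000
print(f"At {assumed_watts_per_server} W per server: {old_kw:.0f} kW -> {new_kw:.0f} kW")
```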
Collaboration and Open Standards
One of the most surprising announcements at the OCP Summit was the launch of the x86 Ecosystem Advisory Group, a collaboration between AMD and its key competitors. The initiative aims to establish common standards for compatibility and interoperability, reflecting the company’s commitment to open ecosystems.
So what’s the TechArena take? As data becomes increasingly distributed across edge and cloud environments, AMD solutions empower customers to extract value from this continuum. From the high-performance EPYC 9000 series for data centers to Ryzen-powered endpoints, AMD offers a comprehensive portfolio designed for efficiency and scalability. This adaptability is critical in a world where businesses and consumers demand instant access to data and services.
Tune in to our Data Insights podcast with Kuppuswamy. For those seeking more insights into AMD data center technologies, Kuppuswamy encouraged audiences to explore resources on their website and social media platforms.

Ocient is disrupting the data warehousing space by offering a unified data platform that optimizes for always-on, compute-intensive data and AI workloads. This hyperscale enterprise data warehouse platform enables swift transformation and analysis of petabyte-scale data at speeds 10 to 50 times faster than competitor solutions, at a disruptively better price.
I recently had the pleasure of learning more about Ocient from Vice President of Marketing Jenna Boller Chorn during the Open Compute Project (OCP) Summit. Jenna joined Jeniece Wnorowski of Solidigm and me to discuss Ocient’s pioneering data warehousing technology and its impact on the industry.
Ocient’s platform consolidates various capabilities into a single system, eliminating the need for data movement between disparate platforms.
“By bringing more capabilities directly to their data in one platform, our customers typically realize a 50 to 90% savings in cost, system footprint, and energy consumption,” Jenna said.
Ocient’s unique architecture centers on a concept known as Compute Adjacent Storage Architecture (CASA), which tightly integrates compute and storage layers. Ocient’s technology leverages Solidigm’s NVMe SSDs instead of traditional hard drives, providing the speed and performance essential for real-time data management and analysis.
Jenna emphasized Ocient’s commitment to innovation at the software level for higher performance and lower operational costs.
“We’re really focused on maximizing that efficiency benefit, having a really tight data layer at the foundation,” Jenna said.
As companies prepare for increasingly data-driven and AI-intensive workloads, Ocient’s technology is built to help manage costs and performance at scale. Jenna noted that traditional data warehousing models have often focused on elasticity and convenience, allowing users to spin up new environments quickly. This approach can lead to unpredictable costs.
“We’re starting to see with customers that operate on a very always-on basis, the cost quickly gets out of control and actually becomes very unpredictable for them,” Jenna observed. “It’s not uncommon for me to talk to customers who are aggressively deleting data and introducing constraints on their environment to manage costs.”
Ocient’s technology allows companies to control these expenses by streamlining data processing, preparation, and exploration stages directly on the platform.
“As customers need to do more, particularly in the age of AI, they’re going to need to be more efficient at that core foundational layer,” Jenna noted.
Ocient’s approach reduces the need to transfer data between systems, which minimizes security risks and operational overhead.
The company also supports AI initiatives by incorporating in-database machine learning capabilities, allowing clients to train and deploy models directly within Ocient’s platform.
“We launched Ocient ML last year, bringing ML directly to the data in Ocient,” Jenna said. This functionality enables clients to explore, prepare, and process data with fewer resources and less time, making the pipeline for predictive AI and general AI more streamlined and cost-effective.
Customer satisfaction is a high priority for Ocient, especially as clients tackle high-compute tasks requiring seamless integration with existing systems. Jenna described the role of Ocient’s Customer Solutions and Workload Services Team, which helps clients manage data pipelines and identify ways to optimize processing and pre-processing tasks. This hands-on support ensures that Ocient’s clients can realize the platform's benefits from day one, which has led to a high retention rate among their users.
“By the time they go with Ocient, they've already seen everything working, and they're realizing value from day one,” Jenna highlighted. “That drives incredible stickiness with our customers.”
Ocient’s SSD-exclusive architecture means that clients can avoid energy-intensive hard disk drives while still achieving the performance levels they require.
“From our first day, we’ve always optimized for performance at scale and for efficiency,” Jenna said. By showing clients the comparative footprint of legacy systems versus Ocient’s solution, Ocient often reveals a 50 to 90 percent reduction in operational costs and energy consumption.
Jenna also expressed hope for more transparency around the energy consumption of software applications, particularly as AI applications increase in popularity.
So what’s the TechArena take? Hyperscale data environments are projected to account for over 50% of global data center capacity by 2026, so, in short, Ocient is in the catbird seat. Their technology is well-positioned to serve organizations requiring high-efficiency, large-scale data solutions.
With continued innovations, strategic partnerships, and a firm commitment to efficiency and sustainability, Ocient is leading a transformative shift in how organizations manage and leverage their data. And their commitment to sustainable practices help set it apart in a rapidly changing industry.
Interested in learning more? Listen to the full podcast and visit Ocient.com or connect with them on LinkedIn.

As hyperscalers, supercomputing operators, and advanced enterprises globally rush to modernize and expand operations, CoolIT has emerged as a key partner for data center cooling – providing efficient, reliable solutions that have been proven in the market for 20 years.
Liquid cooling – once the target of skepticism across a data center industry that balked at its complexity and didn’t need it yet – has exploded on the data center scene as one of the key enablers of heavy AI workloads. During the OCP Summit 2024, new liquid cooling entrants could be seen in every direction. And what’s more, established players in other parts of data center infrastructure have begun planting liquid cooling flags.
It’s more than apparent that the opportunity is gargantuan. But there are few liquid cooling players that come to this data center modernization party with 20 years of expertise. CoolIT is one of them.
I thoroughly enjoyed the opportunity to sit down during OCP Summit with Charles Robison, Director of Marketing for CoolIT Systems, to learn more about this critical player in the AI data center landscape.
From Chips to Data Centers
CoolIT’s liquid cooling tech has its roots in cooling chips for high-performance gaming platforms. This foundational knowledge of chip-level cooling enabled the company to quickly pivot when data center demand began to skyrocket.
Today, CoolIT’s offerings include cold plates and advanced cooling loops designed to support Original Equipment Manufacturers (OEMs) and Original Design Manufacturers (ODMs).
But as Charles highlighted, its core offering is cold plate technology, which targets hotspots on chips, delivering focused cooling where it’s needed most, ensuring systems operate optimally even under high workloads.
Why the Focus on Direct Liquid Cooling
In the liquid cooling industry, various methods are available, including immersion cooling and rear-door heat exchangers. CoolIT focuses on single-phase direct liquid cooling (DLC) because it is proven, reliable, and scalable. With liquid cooling technologies tested across generations of servers from brands like HPE and Dell, CoolIT’s cold plates stand out for their reliability. Single-phase DLC is particularly effective for handling high thermal design power (TDP) in modern chips, including hot spots that require targeted cooling. By focusing on DLC, CoolIT addresses both the demand for scalable solutions and the ability to cool today’s high-powered chips.
While CoolIT recognizes the value of other cooling approaches, such as immersion cooling, DLC remains the most feasible option for wide-scale deployment. As Charles explained, “Our technology has been through multiple generations… it’s a proven technology and a scalable technology.” This commitment to proven solutions ensures data center operators have reliable and consistent performance, a necessity in an industry where operational continuity is crucial.
CoolIT’s reputation as an end-to-end provider has been bolstered by their ability to cover every aspect of data center cooling. They supply a full suite of products, from cold plate loops that deliver direct cooling to chips, to Coolant Distribution Units (CDUs) that manage the overall cooling flow. This comprehensive approach ensures that customers receive a streamlined solution, custom-tailored to their needs – making CoolIT’s systems a reliable choice in a high-stakes environment where consistency is paramount.
Charles aptly pointed out that when Jensen Huang of NVIDIA announced liquid cooling as the future of data centers, it marked a turning point for the liquid cooling segment. Once confined to high-performance computing (HPC) and academia, it is now erupting onto the data center infrastructure scene as a solution that significantly enhances energy efficiency.
“Liquid cooling has crossed the chasm; it’s now a mainstream approach,” Charles said.
With their single-phase direct liquid cooling, data centers can handle AI-driven workloads that push conventional cooling to its limits.
Quality. Service. Support.
“Quality, as they say, is job one,” Charles said, explaining CoolIT’s top priorities.
Reliability is at the center of CoolIT’s approach. The cooling systems they produce are meticulously designed, using technologies like friction stir welding to create a single, molecular-level fusion of cold plates. This construction minimizes the potential for leaks, ensuring longevity and reliability. CoolIT employs rigorous quality control, from inspecting incoming parts to conducting end-of-line testing. Every system is tested before it leaves the factory, guaranteeing that clients, including major server manufacturers like Dell and HPE, receive a flawless product.
Beyond the product itself, CoolIT offers a comprehensive service network covering more than 70 countries, ensuring seamless deployment and ongoing support. CoolIT assists customers at every stage, from design consultation to installation and commissioning, making it easy for operators to integrate liquid cooling into their existing data centers.
“So you want to figure out how to design?” Charles said. “We'll help you with that. Would you like to install it? Yep. We've got an installation team that will come in. And we'll put together a secondary fluid network if you need…We'll help you figure that out. We'll commission it. So we'll put the fluid in and we'll actually get it running.”
Scaling for Growth
Looking into 2025 and beyond, the demand for liquid cooling will only increase as data centers handle denser and more complex workloads. CoolIT has invested significantly in expanding their production capabilities, scaling up by 25 times to meet the growing demand. This capacity enables CoolIT to handle both brownfield deployments – where liquid-to-air CDUs can be introduced to existing data centers – and greenfield projects that require a more comprehensive cooling solution from the ground up.
The scale of CoolIT’s investment in production reflects the growth potential they see in the market. As Charles noted, “We certainly see just a massive deployment of liquid cooling… we have multi-gigawatt manufacturing capacity within our shop.” This preparation positions CoolIT as a capable partner for any data center operator looking to future-proof their infrastructure against the demands of tomorrow’s computing workloads.
Educating the Industry and Pioneering Change
To ease the industry’s shift toward liquid cooling, CoolIT emphasizes education and industry collaboration. As a founding member of the Liquid Cooling Coalition, CoolIT is committed to informing operators, policymakers, and the broader industry about the benefits and applications of liquid cooling. Through these initiatives, CoolIT hopes to normalize the adoption of liquid cooling, fostering an industry-wide shift toward more efficient, sustainable practices.
So what’s the TechArena take? I’m so grateful to Solidigm and our Data Insights series for opening the door to this delightful discussion. As for CoolIT, in short, they are rocking liquid cooling. Their solutions have proven themselves in one of the most demanding environments — data centers powering AI. And while this space is nascent and has more promise than volume deployments, as we head into the second half of the decade, we at TechArena are expecting a hockey stick ramp for this segment of the industry. We also are keen to see how CoolIT leverages this massive opportunity towards financial returns. Listen to the full podcast here.

I was lucky to catch up with Eddie Ramirez, VP of Marketing for Arm’s infrastructure business, at the recent OCP Summit. Eddie was last on the show at the previous OCP Summit, talking about Arm’s focus on developing a data center ecosystem, and I was keen to learn about the progress the company had made in this arena. Arm’s advancements are making a mark on innovative data center infrastructure with a focus on efficiency, chiplet innovation, and scalable solution design.
During the recent OCP Summit 2024, Data Insights podcast co-host Jeniece Wnorowski of Solidigm and I had the pleasure of welcoming Eddie back to the TechArena to better understand the company’s impact across the industry. Arm’s big announcement this year at OCP Summit centered around the power of chiplets to accelerate silicon design. Chiplet technology enables multiple processing units to be combined in a single package, streamlining custom chip design. Arm’s Total Design program enables partners to adopt their cores efficiently, with configurations that cater to diverse needs, from general-purpose tasks to specialized AI processing. This modular integration approach enables flexibility, supporting efficient scaling for data centers that need adaptable configurations for different workloads.
Eight different partners within Arm’s Total Design program announced chiplet projects they have kicked off, ranging from 16-core to 64-core setups that can be used in a variety of products. One partnership in particular brings together Samsung Foundry; ADTechnology, a Korean ASIC design partner; and Rebellions AI, a startup delivering TPU accelerators. Through this collaboration, Arm has demonstrated how its program helps deliver an integrated design that enables 3X greater performance efficiency than conventional GPU-based solutions – underscoring the role best-in-breed chiplet solutions can play in data center applications. Given where chiplet design is going with Arm, it comes as no surprise that this was a focus of OCP Summit, land of the hyperscalers. Arm cores have gained traction among the major players – AWS, Microsoft and Google – which all now integrate the technology in their home-grown designs, utilizing them for internal workloads as well as customer instances.
It's been in Arm’s DNA to provide compute-efficient architectures. Their design delivers up to 60% higher power efficiency than x86 servers, allowing cloud providers to reduce power consumption and total cost of ownership (TCO) while achieving sustainability goals. This energy-saving approach is the key to Arm’s success with the hyperscalers, Eddie said, providing them a huge benefit and positioning Arm as an optimal choice for large-scale workloads.
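As a rough illustration of what “up to 60% higher power efficiency” can mean for a fixed amount of work: if performance per watt improves by 60%, the same workload needs roughly 1/1.6 of the energy. The baseline energy figure in this sketch is an assumption for illustration only.

```python
# Rough illustration of the "up to 60% higher power efficiency" claim.
# The baseline annual energy figure is an assumed number for illustration.
baseline_mwh = 100.0      # assumed annual energy for a fixed workload on x86
efficiency_gain = 0.60    # upper bound of the cited perf-per-watt improvement

arm_mwh = baseline_mwh / (1 + efficiency_gain)
savings_pct = (1 - arm_mwh / baseline_mwh) * 100

print(f"Same workload on Arm: {arm_mwh:.1f} MWh (~{savings_pct:.0f}% less energy)")
# -> 62.5 MWh, roughly 38% less energy at the upper bound of the claim
```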
With the rise of AI, the need for GPUs is amplified, especially to train large-scale models. However, CPUs remain essential, particularly for the inference stage, where AI models process data and provide real-time predictions. Unlike training, which demands high power, inference tasks can be handled efficiently by CPUs. Arm-based processors offer a cost-effective solution, balancing performance with reduced energy consumption.
Arm’s reach extends beyond computing into networking and storage within the data center. Arm cores are now embedded in top-of-rack switches, data processing units (DPUs), and baseboard management controllers (BMCs), enhancing efficiency in high-speed data transmission and storage. By deploying Arm cores across these types of infrastructure, data centers achieve better resource management and power optimization, aligning with the performance demands of AI workloads. This integrated approach allows data centers to streamline operations and enhance energy efficiency at every level.
Arm’s Neoverse platform – the company's infrastructure-focused product line – includes high-performance cores and interconnect IPs for data centers and edge environments. Neoverse’s adaptable architecture enables Arm partners to integrate the latest technology and expand it with additional I/O or storage features.
V3 of the Neoverse platform enhances Arm-based systems’ performance and flexibility, making them suitable for AI and data processing applications. This scalable approach enables data centers to meet growing performance needs without compromising power efficiency.
So what’s the TechArena take? I love chiplets and love what Arm is doing with an ecosystem. This design innovation makes sense for a wide array of use cases, and Arm’s foundation will help the industry move further, faster. Arm’s commitment to energy efficiency, modularity, and open collaboration also aligns well to Open Compute Project tenets, transforming data center infrastructure and offering true differentiation in a crowded field. Through programs like Total Design and platforms like Neoverse, Arm is responsibly building efficient and scalable solutions that meet the demands of AI, cloud, and edge applications.
There is a lot of disruption in the compute landscape with AI acceleration taking center stage. I see two paths of opportunity for Arm: one as a “head node” alternative to x86 with noted energy efficiency advantages, the other as a chiplet core integrated with TPU or other acceleration chiplets as an alternative to GPUs. Both are exciting to see gain traction in the market, and we’ll keep watching this space for more.
Listen to the full podcast here.

Join Allyson Klein and Jeniece Wnorowski in this episode of Data Insights as they discuss key takeaways from the 2024 OCP Summit with Scott Shadley, focusing on AI advancements and storage innovations.
In this episode of Data Insights by Solidigm, Ravi Kuppuswamy of AMD unpacks the company’s innovations in data center computing and how they adapt to AI demands while supporting traditional workloads.