June 23 - 25, 2025
Denver, Colorado
View More Details & Registration
Note: The schedule is subject to change.

The Sched app allows you to build your schedule but is not a substitute for your event registration. You must be registered for Open Source Summit North America 2025 to participate in the sessions. If you have not registered but would like to join us, please go to the event registration page to purchase a registration.

This schedule is automatically displayed in Mountain Daylight Time (UTC/GMT -6). To see the schedule in your preferred timezone, please select from the drop-down menu to the right, above "Filter by Date."

IMPORTANT NOTE: Timing of sessions and room locations are subject to change.

Venue: Bluebird Ballroom 3E

Tuesday, June 24

11:00am MDT

Open AI (Two Words): The Only Path Forward for AI - Matt White, Linux Foundation
Tuesday June 24, 2025 11:00am - 11:40am MDT
The exponential growth in artificial intelligence capabilities has been fundamentally driven by open science and collaborative research. From the publication of the "Attention Is All You Need" paper that introduced the Transformer architecture to OpenAI's strategic release of GPT-2, openness has repeatedly catalyzed breakthrough innovations while enabling crucial public discourse around AI's implications.

This talk presents a compelling case for why open source development is not just beneficial but essential for the future of safe and equitable AI. We'll examine how the open-source ecosystem has democratized access to AI technology, enabled transparency and innovation, and fostered a global community of researchers working to ensure AI systems are robust and aligned with human values.

Through concrete examples, we'll demonstrate how open-source initiatives have already begun addressing critical challenges in AI development. The Model Openness Framework has established clear standards for transparency, while the pioneering OpenMDW license has created a legal framework for responsible sharing of AI artifacts.
Speakers
Matt White
GM of AI, Executive Director, PyTorch, Linux Foundation
Matt White is the Executive Director of the PyTorch Foundation and GM of AI at the Linux Foundation. He is also the Director of the Generative AI Commons. Matt has nearly 30 years of experience in applied research and standards in AI and data in telecom, media and gaming industries...
Bluebird Ballroom 3E
  Open AI + Data

11:55am MDT

The Responsible Generative AI Framework Pathways: Where Do We Go From Here? - Ofer Hermoni, iForAI & Oita Coleman, Open Voice TrustMark Initiative
Tuesday June 24, 2025 11:55am - 12:35pm MDT
The Responsible Generative AI Framework (RGAF) lays the foundation for ethical and transparent AI development, but what comes next? This panel will explore the Responsible AI Pathways, a set of strategic directions designed to move from framework to implementation.

Panelists will discuss the four key pathways shaping the future of responsible AI:
• Big-Picture Alignment – Understanding AI’s role in humanity’s future and aligning LF AI & Data initiatives with ethical AI progress.
• Ecosystem Mapping – Identifying gaps, overlaps, and collaboration opportunities within the global Responsible AI landscape.
• Deep Dive into Core Dimensions – Addressing AI safety, security, sustainability, and other critical aspects for responsible development.
• Practical Implementation – Grounding principles in real-world use cases, industry applications, and open-source tooling.
This session will also highlight the role of AI safety and security in responsible AI adoption and provide attendees with insights, strategies, and next steps to ensure AI innovation remains transparent, accountable, and trustworthy.
Join us for a forward-thinking discussion on how to shape the future of responsible Gen AI.
Speakers
Ofer Hermoni
Founder, Chief AI Officer, iForAI
Dr. Ofer Hermoni is a visionary AI leader with a Ph.D. in Computer Science and 60+ patents in AI, security, networking, and blockchain. He co-founded the Linux Foundation AI and served as its inaugural technical chair, shaping the global AI ecosystem. A two-time startup founder, he...

Oita Coleman
Project Lead / Senior Advisor, Open Voice TrustMark Initiative
Oita Coleman is the Project Lead/Senior Advisor at the Open Voice TrustMark Initiative, a global Linux Foundation project dedicated to educating and advocating for open standards and best practices for conversational AI technologies. In her role, she is responsible for developing...
Bluebird Ballroom 3E
  Open AI + Data

2:10pm MDT

Lightning Talk: Streaming and Processing Edge Vision Data in Real Time - Joyce Lin, Viam
Tuesday June 24, 2025 2:10pm - 2:20pm MDT
Edge-based computer vision gives us real-time insights, but getting that data where it needs to go without high bandwidth, lag, or hardware strain is a big challenge. Learn how to build a fast, event-driven vision pipeline using WebRTC for real-time streaming and gRPC for lightweight commands. Whether for security cameras or IoT, you'll gain a practical blueprint for a scalable, open-source vision system that stays responsive at the edge while remaining cost-effective, adaptable, and cloud-independent.
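To make the "event-driven" idea concrete, here is a minimal, hypothetical Python sketch that detects motion locally with OpenCV and only emits an event when a frame changes enough. The function name and threshold are illustrative assumptions, and the WebRTC/gRPC transport from the talk is only indicated in comments rather than implemented; this is not Viam's pipeline.

```python
# Hypothetical event-driven edge vision sketch: instead of streaming every frame
# off the device, detect changes locally and only act when something happens.
import cv2  # pip install opencv-python

MOTION_THRESHOLD = 5000  # changed-pixel count that counts as an "event" (tune per camera)

def watch_camera(device_index: int = 0) -> None:
    cap = cv2.VideoCapture(device_index)
    ok, previous = cap.read()
    if not ok:
        raise RuntimeError("could not read from camera")
    previous = cv2.cvtColor(previous, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Pixel-wise difference against the previous frame: cheap enough for edge hardware.
        delta = cv2.absdiff(previous, gray)
        _, mask = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)
        changed = cv2.countNonZero(mask)
        if changed > MOTION_THRESHOLD:
            # In a real pipeline this is where you would push a frame over WebRTC
            # or issue a gRPC command; here we just log the event.
            print(f"motion event: {changed} pixels changed")
        previous = gray

    cap.release()

if __name__ == "__main__":
    watch_camera()
```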
Speakers
Joyce Lin
Head of Developer Relations, Viam
Joyce Lin is the head of developer relations at Viam, a robotics platform that connects software with smart machines in the physical world. Based in San Francisco, she is also a TikTok influencer, dog mom, cat mom, and writer.
Bluebird Ballroom 3E
  Open AI + Data

2:25pm MDT

Lightning Talk: Serving Guardrail Detectors on vLLM - Evaline Ju, IBM
Tuesday June 24, 2025 2:25pm - 2:35pm MDT
With the increase in generative AI model use, there is growing concern about how models can divulge information or generate inappropriate content. This concern is driving the development of technologies to “guardrail” user interactions with models. Some of these guardrail models are simple classification models, while others, like IBM’s Granite Guardian or Meta’s Llama Guard, are themselves generative models able to identify multiple risks. As new models appear, a variety of large language model serving solutions are being developed and optimized. One open-source example, vLLM, has become an increasingly popular serving engine.

In this talk, I’ll discuss how we built an open-source adapter on top of vLLM that serves an API for guardrail models, so that models like Granite Guardian and Llama Guard can be easily applied as guardrails in generative AI workflows.
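As a rough illustration of the pattern described above (not the speaker's adapter), the sketch below loads a guardrail model with vLLM's offline Python API and uses it to classify a prompt before it reaches the main generative model. The model id and the plain-text prompt format are assumptions; real detectors such as Granite Guardian expect their own chat template.

```python
# Minimal sketch: use a guardrail model served by vLLM to screen user input.
from vllm import LLM, SamplingParams

GUARD_MODEL = "ibm-granite/granite-guardian-3.0-2b"  # assumed Hugging Face model id

def is_flagged(user_text: str, guard: LLM) -> bool:
    # Simplified plain-text classification prompt (real guardrail models use their own template).
    prompt = (
        "You are a safety classifier. Answer 'Yes' if the following user message "
        "is harmful or requests disallowed content, otherwise answer 'No'.\n\n"
        f"User message: {user_text}\nAnswer:"
    )
    params = SamplingParams(temperature=0.0, max_tokens=4)
    result = guard.generate([prompt], params)[0]
    return result.outputs[0].text.strip().lower().startswith("yes")

if __name__ == "__main__":
    guard = LLM(model=GUARD_MODEL)
    print(is_flagged("How do I make a phishing email?", guard))
```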
Speakers
Evaline Ju
Senior Software Engineer, IBM
Evaline is a senior engineer working on the watsonx platform engineering team of IBM Research and based in Denver, Colorado. She currently focuses on building guardrails infrastructure for large language model workflows. Her previous experience includes MLOps for IBM’s cloud ML...
Bluebird Ballroom 3E
  Open AI + Data

2:40pm MDT

Lightning Talk: Future-Proofing Compliance: Leveraging Knowledge Graphs and AI in Cybersecurity - Zeyno Dodd, Conjectura R&D
Tuesday June 24, 2025 2:40pm - 2:50pm MDT
Traditional approaches to cybersecurity compliance are being redefined in an era marked by rapidly evolving cybersecurity threats and stringent compliance requirements. This session explores the innovative integration of Knowledge Graphs (KG) and Retrieval Augmented Generation (RAG) with Generative AI to address the ever-evolving complexities of cybersecurity frameworks like NIST CSF v2.0, NIST 800-171, and CMMC. I will briefly delve into an open-source proof-of-concept demonstrating how these technologies can automate the discovery of compliance relationships and streamline cross-framework assessments. Join me in discovering how we can significantly enhance cybersecurity measures by harnessing open-source tools and AI, reducing the resource burden, and maintaining timely and robust adherence to evolving standards.
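As a hypothetical illustration of the knowledge-graph side of such a pipeline (not the speaker's proof-of-concept), the sketch below stores cross-framework control mappings as a directed graph with networkx and retrieves a control's neighborhood as grounding context for a generative model. The control IDs and mappings are placeholders, not an official crosswalk.

```python
# Sketch: a tiny compliance knowledge graph plus a retrieval step for RAG grounding.
import networkx as nx

def build_graph() -> nx.DiGraph:
    g = nx.DiGraph()
    # Illustrative placeholder mappings between frameworks.
    g.add_edge("NIST CSF PR.AC-1", "NIST 800-171 3.1.1", relation="maps_to")
    g.add_edge("NIST 800-171 3.1.1", "CMMC AC.L1-3.1.1", relation="maps_to")
    return g

def retrieve_context(g: nx.DiGraph, control_id: str) -> str:
    # Everything reachable from the control becomes retrieval context for the LLM prompt.
    related = nx.descendants(g, control_id)
    lines = [f"{control_id} maps to {other}" for other in sorted(related)]
    return "\n".join(lines)

if __name__ == "__main__":
    g = build_graph()
    context = retrieve_context(g, "NIST CSF PR.AC-1")
    # In a full RAG setup this context string would be prepended to the question
    # sent to a generative model; here we just print it.
    print(context)
```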
Speakers
Zeyno Dodd
R&D Architect, Conjectura R&D
Cloud Solution Architect and Researcher with 25+ years in software development and research. Committed to leveraging AI to address complex real-world challenges with societal impact. Specializes in applying Graph Neural Networks (GNN) within Cloud/Edge/Hybrid Machine Learning frameworks...
Bluebird Ballroom 3E
  Open AI + Data
  • Audience Experience Level: Any

3:05pm MDT

Universal AI: Execute Your Models Where Your Data (And Users) Are - David Aronchick, Expanso
Tuesday June 24, 2025 3:05pm - 3:45pm MDT
Data is exploding across distributed locations, but centralized processing is increasingly unsustainable. This talk explores "compute over data" architectures that bring ML to your data, unlocking new possibilities through real-world examples.
Speakers
David Aronchick
CEO, Expanso
David Aronchick is CEO of Expanso, the distributed computing company built on Bacalhau (https://bacalhau.org). Previously, he led Compute over Data at Protocol Labs and Open Source Machine Learning Strategy at Azure, and was a product manager for Kubernetes...
Bluebird Ballroom 3E
  Open AI + Data

4:20pm MDT

Gotta Cache 'em All: Scaling AI Workloads With Model Caching in a Hybrid Cloud - Rituraj Singh & Jin Dong, Bloomberg
Tuesday June 24, 2025 4:20pm - 5:00pm MDT
AI models are evolving rapidly, while also growing exponentially in size and complexity. As AI workloads become larger, it is crucial to address the challenges of rapidly scaling inference services during peak hours and of ensuring optimal GPU utilization for fine-tuning workloads. To tackle this, Bloomberg’s Data Science Platform team has implemented a “Model Cache” feature in the open-source KServe project for caching large models on GPUs in a multi-cloud and multi-cluster cloud-native environment.

This talk discusses the challenges faced with hosting large models for inference and fine-tuning purposes, and how model caching can help mitigate some of these challenges by reducing load times during auto-scaling of services, improving resource utilization, and boosting data scientists’ productivity. The talk dives into how Bloomberg integrated KServe’s Model Cache into its AI workloads and built an API on top of Karmada to manage cache federation. AI infrastructure engineers will learn about the profound impact of enabling model caching and how teams can adopt this feature in their own AI infrastructure environment.
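For orientation, here is a hedged sketch of the layer this work targets: creating a KServe InferenceService through the Kubernetes API with the official Python client. The manifest values (modelFormat, storageUri, resource names) are illustrative assumptions, and the cache-specific resources KServe uses to pre-stage models on nodes are intentionally omitted, since those details depend on the KServe version and are what the talk covers.

```python
# Sketch: deploy a model behind KServe by creating an InferenceService custom resource.
from kubernetes import client, config

inference_service = {
    "apiVersion": "serving.kserve.io/v1beta1",
    "kind": "InferenceService",
    "metadata": {"name": "llm-demo", "namespace": "default"},
    "spec": {
        "predictor": {
            "model": {
                "modelFormat": {"name": "huggingface"},  # assumed runtime for this sketch
                # With model caching enabled, weights at this URI would already be
                # staged locally, so new replicas skip the long download step.
                "storageUri": "s3://models/llm-demo",
            }
        }
    },
}

if __name__ == "__main__":
    config.load_kube_config()  # assumes a configured kubeconfig with KServe installed
    api = client.CustomObjectsApi()
    api.create_namespaced_custom_object(
        group="serving.kserve.io",
        version="v1beta1",
        namespace="default",
        plural="inferenceservices",
        body=inference_service,
    )
```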
Speakers
Rituraj Singh
Software Engineer, Bloomberg LP
Rituraj Singh is a software engineer on Bloomberg’s Data Science Platform engineering team, which is focused on enabling large-scale AI model training on GPUs. Rituraj graduated from Carnegie Mellon University with a master's degree in computer engineering.

Jin Dong
Software Engineer, Bloomberg
Jin Dong is a software engineer at Bloomberg. He works on building an inference platform for machine learning with KServe.
Bluebird Ballroom 3E
  Open AI + Data
 