October 22-23, 2024
Santa Clara, CA
View More Details & Registration

The Sched app allows you to build your schedule but is not a substitute for your event registration. You must be registered for RISC-V Summit to participate in the sessions. If you have not registered but would like to join us, please go to the event registration page to purchase a registration.
Grand Ballroom G (Level 1)
Monday, October 21
 

9:00am PDT

RISC-V 101
Monday October 21, 2024 9:00am - 11:00am PDT
Come hear about the What, How, and Why of RISC-V.

This session is perfect for anyone curious about RISC-V or looking to deepen their understanding and engagement.

Topics covered include What is and Why RISC-V?, Software & RISC-V, and How to Get Involved and Engage in RISC-V.

Learn More

*Separate registration is required.
Monday October 21, 2024 9:00am - 11:00am PDT
Grand Ballroom G (Level 1)

11:30am PDT

Hackathon
Monday October 21, 2024 11:30am - 6:00pm PDT
Participate in a hands-on hackathon and gain access to tooling and mentorship!

Learn More

*Separate registration is required.
Monday October 21, 2024 11:30am - 6:00pm PDT
Grand Ballroom G (Level 1)
 
Tuesday, October 22
 

11:30am PDT

Say Goodbye to Fear, Uncertainty, and Doubt: Innovate with Codasip Studio Fusion - Keith Graham, Codasip
Tuesday October 22, 2024 11:30am - 11:48am PDT
Today’s Artificial Intelligence (AI) companies and products are at the forefront of innovation, unlocking new markets and tackling the toughest technological challenges of the future. Innovation isn’t just a buzzword; it’s the gateway to new revenue streams and higher profits. At the heart of this innovation lies the need for new architectures that push the limits of performance while slashing costs and power consumption. This is where Custom Compute comes in – transforming these groundbreaking ideas into reality. But even the most advanced tech isn’t enough if it's not the right fit. To launch game-changing products that drive growth and maximize profits, they must be developed quickly and with confidence. That’s where Codasip Studio Fusion comes in – making Custom Compute the ultimate choice by eliminating Fear, Uncertainty, and Doubt, so you can innovate boldly and lead the market.
Speakers
Keith Graham

VP of University Program, Codasip
Over my thirty-nine-year career, I've gone from designing workstations, developing multi-processor cache and memory management units, and selling semiconductors, to running a small business, teaching embedded systems and computer architecture as a senior instructor, and leading Codasip's University...
Tuesday October 22, 2024 11:30am - 11:48am PDT
Grand Ballroom G (Level 1)

11:50am PDT

The Benefits of Building New AI Accelerators with RISC-V - Cliff Young & Martin Maas, Google DeepMind
Tuesday October 22, 2024 11:50am - 12:28pm PDT
There has been huge interest in building accelerators for AI in the decade since AlexNet ushered in the current deep learning revolution. Billions of dollars in capital have been committed, and many ambitious projects have been launched, across established manufacturers, hyperscalers, and startups. In this talk, we will reflect on our experiences at Google designing and deploying successful accelerators and the different ways that subtle challenges make effective acceleration hard. RISC-V potentially helps with these challenges, while lowering barriers to entry, reducing risks, and sharing the benefit of expertise and experience. We will make connections between our experiences and how RISC-V accelerates accelerator development itself, highlighting how the shared work on a RISC-V ecosystem for deep learning acceleration can be positive-sum, benefiting all who participate.
Speakers
Martin Maas

Staff Research Scientist, Google DeepMind
Martin Maas is a Staff Research Scientist at Google DeepMind. His research interests are in language runtimes, computer architecture, systems, and machine learning, with a focus on applying ML to systems problems. He also chairs the RISC-V J Extension Task Group, which investigates...
Cliff Young

Software Engineer, Google DeepMind
Cliff Young is a software engineer at Google DeepMind, where he works on codesign for deep learning accelerators. He is one of the designers of Google’s Tensor Processing Unit (TPU) and one of the founders of the MLPerf benchmark. Previously, Cliff built special-purpose supercomputers...
Tuesday October 22, 2024 11:50am - 12:28pm PDT
Grand Ballroom G (Level 1)
  AI / ML

1:55pm PDT

Lessons Learned in Using RISC-V for Generative AI and Where We Can Go from Here - Jin Kim, Esperanto Technologies
Tuesday October 22, 2024 1:55pm - 2:13pm PDT
The size of the Foundation models behind the Generative AI revolution has grown at a rate of more than 400x every 2 years, while DRAM memory capacity has been increasing at only 2x every two years, leading to what is commonly called the “memory wall”. Similarly, while the required throughput rate of the LLMs making up the Foundation models has been increasing at 10x per year, the computational capability of GPUs has grown at a pace of only 10x in 4 years, leading to what is commonly called the “compute wall”. These trends have raised a new set of challenges in how to economically train these models, cost-effectively run them, and manage the tremendous increase in electrical power. The first contribution of this session is the lessons learned in leveraging hardware and software developed for traditional AI workloads, and how they were extended to support Generative AI. The session’s next main contribution is how we are applying lessons learned from our first-generation technology to our next generation. In this session’s final contribution, we will discuss how the RISC-V ISA could be extended in ways that would make it more efficient and compelling at running Generative AI.
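As a rough worked comparison of the growth rates quoted above (treating the figures as approximate and measured over a common four-year window), the two gaps widen roughly as follows:

```latex
\[
\underbrace{400^{2}}_{\text{model size}} \Big/ \underbrace{2^{2}}_{\text{DRAM capacity}} = 40{,}000\times
\quad \text{(memory gap over 4 years)}, \qquad
\underbrace{10^{4}}_{\text{required throughput}} \Big/ \underbrace{10^{1}}_{\text{GPU compute}} = 1{,}000\times
\quad \text{(compute gap over 4 years)}.
\]
```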
Speakers
Jin Kim

Chief Data Science Officer, Esperanto Technologies
An executive, entrepreneur, and data scientist, Jin’s experience spans enterprise software products and services in AI, big data, and advanced analytics. He has led multinational engineering teams at both established and startup companies, including GraphSQL, Wave Computing, Objectivity...
Tuesday October 22, 2024 1:55pm - 2:13pm PDT
Grand Ballroom G (Level 1)
  AI / ML

2:15pm PDT

Building Tool Chains for RISC-V AI Accelerators - Jeremy Bennett, Embecosm
Tuesday October 22, 2024 2:15pm - 2:33pm PDT
Our client is developing a massively parallel 64-bit chip for AI inference workloads. To facilitate early software development, we are bringing up an AI tool flow for this chip in a QEMU RISC-V environment. In this talk, we'll share our experience getting three key AI frameworks working with RISC-V QEMU: PyTorch, TensorFlow, and the OpenXLA compiler. We will describe the two key challenges we faced, their solutions, and the lessons learned for future work. The first challenge is simply getting the tools to run effectively in an emulated RISC-V environment: these tools are large, fast-moving pieces of software with extensive external dependencies. The second challenge is performance. AI workloads are inherently parallel, and hence run efficiently on vector-enabled hardware; however, the RISC-V Vector extension (RVV) is relatively new, and we experienced difficulty getting the performance we expected out of the tool flow. By the end of this talk, we hope our audience will have a better understanding of the challenges in bringing up an AI tool flow under QEMU, and that our experience will help them bring up their own AI tool flows.
Speakers
Jeremy Bennett

Chief Executive, Embecosm
Dr Jeremy Bennett is founder and Chief Executive of Embecosm (http://www.embecosm.com), a consultancy implementing open source compilers, chip simulators and AI/ML for major corporations around the world. He is the author of the standard textbook "Introduction to Compiling Techniques...
Tuesday October 22, 2024 2:15pm - 2:33pm PDT
Grand Ballroom G (Level 1)
  AI / ML

2:35pm PDT

LLM Inference on RISC-V Embedded CPUs - Yueh-Feng Lee, Andes Technology
Tuesday October 22, 2024 2:35pm - 2:53pm PDT
The advancement of large language models (LLMs) has significantly enhanced natural language processing capabilities, enabling complex text understanding and generation tasks. This presentation focuses on optimizing the open-source llama.cpp project for the RISC-V P extension. By running the TinyLLaMA 1.1B model on the Andes Voyager development board using a quad-core CPU supporting the RISC-V P extension, performance results show that the model can achieve near real-time response. This work highlights the potential of RISC-V as an efficient platform for deploying advanced AI models in resource-constrained environments, contributing to the growing field of edge computing and embedded AI applications.
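For context, below is a minimal scalar sketch of the kind of block-quantized dot-product kernel that dominates llama.cpp-style inference. It is illustrative only (the function and parameter names are ours, not from the Andes port); a port targeting the RISC-V P extension would replace the inner loop with packed 8-bit multiply-accumulate instructions operating on general-purpose registers.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative scalar reference: dot product over blocks of int8-quantized
 * weights and activations, rescaled by per-block scale factors. A P-extension
 * port would vectorize the inner loop with packed SIMD multiply-accumulates. */
static float dot_q8(const int8_t *w, const int8_t *x,
                    const float *w_scale, const float *x_scale,
                    size_t n, size_t block)
{
    float acc = 0.0f;
    for (size_t b = 0; b < n / block; b++) {
        int32_t sum = 0;
        for (size_t i = 0; i < block; i++)
            sum += (int32_t)w[b * block + i] * (int32_t)x[b * block + i];
        acc += (float)sum * w_scale[b] * x_scale[b];  /* dequantize per block */
    }
    return acc;
}
```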
Speakers
Yueh-Feng Lee

Manager, Andes Technology
Yueh-Feng Lee received his Ph.D. degree in computer science from National Chiao Tung University. He previously worked at Mediatek and Industrial Technology Research Institute. His areas of focus include AI compiler and runtime, hypervisor technology, and embedded systems.
Tuesday October 22, 2024 2:35pm - 2:53pm PDT
Grand Ballroom G (Level 1)
  AI / ML

2:55pm PDT

Bridging the Gap: Compiling and Optimizing Triton Kernels Onto RISC-V Targets Based on MLIR - Aries Wu, Terapines Technology Co., Ltd.
Tuesday October 22, 2024 2:55pm - 3:33pm PDT
This deep dive will explain an end-to-end software stack solution for RISC-V based AI chips, including an innovative way to write AI kernels in new programming languages such as Triton (and later Mojo), using an MLIR/LLVM-based AI compiler infrastructure to lower Triton kernels and neural networks from frameworks such as PyTorch, ONNX, TensorFlow, and JAX into a range of high-, middle-, and low-level MLIR dialects for coarse-grained high-level optimizations such as loop tiling, kernel fusion, and auto-vectorization. This paves the way for sharing the common open-source Triton kernel libraries provided in PyTorch and other frameworks, and greatly reduces the time needed to adopt an AI software stack on RISC-V based AI chips. This talk will also explore the limitations of the Triton language, how we can extend it, and how the MLIR conversion and optimization passes can better support non-GPU targets such as RISC-V.
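As a plain-C illustration of one of the optimizations named above, the sketch below contrasts a naive matrix-multiply loop nest with a tiled version of the kind such a compiler might generate. It is a hand-written example, not output of the Terapines stack, and the tile size is arbitrary.

```c
#define N    256
#define TILE 32   /* arbitrary tile size chosen for illustration */

/* Naive loop nest: streams the whole B matrix for every row of A. */
void matmul_naive(const float A[N][N], const float B[N][N], float C[N][N])
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            float acc = 0.0f;
            for (int k = 0; k < N; k++)
                acc += A[i][k] * B[k][j];
            C[i][j] = acc;
        }
}

/* Tiled loop nest: works on TILE x TILE blocks so the working set stays in
 * cache, and the innermost j-loop becomes a natural candidate for RVV
 * auto-vectorization. */
void matmul_tiled(const float A[N][N], const float B[N][N], float C[N][N])
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            C[i][j] = 0.0f;

    for (int ii = 0; ii < N; ii += TILE)
        for (int kk = 0; kk < N; kk += TILE)
            for (int jj = 0; jj < N; jj += TILE)
                for (int i = ii; i < ii + TILE; i++)
                    for (int k = kk; k < kk + TILE; k++) {
                        float a = A[i][k];
                        for (int j = jj; j < jj + TILE; j++)
                            C[i][j] += a * B[k][j];
                    }
}
```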
Speakers
Aries Wu

CTO, Terapines Technology Ltd
Co-founder & CTO of Terapines Technology. More than 15 years of compiler design and development experience at Andes, S3 Graphics, Imagination, and Terapines. Specialized in CPU, GPU, GPGPU, and AI compilers based on MLIR, LLVM, and GCC.
Tuesday October 22, 2024 2:55pm - 3:33pm PDT
Grand Ballroom G (Level 1)
  AI / ML
 
Wednesday, October 23
 

8:00am PDT

Collaboration Breakfast - Sponsored by Google
Wednesday October 23, 2024 8:00am - 8:45am PDT
The Google Collaboration Breakfast will be a software-focused panel discussion with David Patterson, Lars Bergstrom, and Andrea Gallo, moderated by Amber Huffman.

No pre-registration is required to attend. We do our best to accommodate everyone interested in joining, but please note that participation is on a first-come, first-served basis.
Wednesday October 23, 2024 8:00am - 8:45am PDT
Grand Ballroom G (Level 1)

11:30am PDT

SiFive Event Trace: The First Zero-Overhead Performance Tool for RISC-V Processors - Carsten Gosvig, SiFive
Wednesday October 23, 2024 11:30am - 11:48am PDT
Historically, software developers have been forced to use special compiler switches to instrument code to gather traces and performance information. This has three disadvantages: it requires recompilation of the code to include the instrumentation, it increases code size, and it affects/distorts the execution timing of the program. SiFive has developed a new approach, SiFive® Event Trace. Event Trace is unique in that it provides front-end hardware filtering to selectively capture specific events as the RISC-V core executes programs in real time. No software instrumentation or recompilation is required, saving development and debug time while avoiding the overhead and timing distortion that can result from software instrumentation. SiFive Event Trace is flexible, allowing developers to choose which events to capture, including calls/returns, exceptions, interrupts, context changes, watchpoints, external triggers, and more. Each trace event has a high-resolution timestamp that provides both duration and interval timing. This session will give developers a complete overview of this innovative profiling solution and demonstrate how to configure, view, and interpret Event Traces.
Speakers
Carsten Gosvig

Developer Tools Engineer, SiFive
Carsten Gosvig is a Developer Tools Engineer at SiFive, heading the Debug, Trace and Profiling SW effort which includes the FreedomStudio (IDE), OpenOCD (JTAG) and GDB SW stack.
Wednesday October 23, 2024 11:30am - 11:48am PDT
Grand Ballroom G (Level 1)
  Software

11:50am PDT

RISC-V LLVM State of the Union - Alex Bradbury, Igalia
Wednesday October 23, 2024 11:50am - 12:08pm PDT
The success of the RISC-V instruction set architecture depends on the ability of software to exploit the hardware effectively, both for the baseline (and the now-defined ISA profiles) and for new instruction set extensions. The LLVM compiler infrastructure (including Clang) is key for this, and has been a major success story for RISC-V software ecosystem enablement through cross-party collaboration. This talk provides an update on the current status, with up-to-date benchmarks for code size and generated code performance vs GCC. We'll explore how recent work in CI and tracking of these metrics has been helping to accelerate progress and ensure quality, and look ahead to future challenges.
Speakers
Alex Bradbury

Compiler Engineer, Igalia
Alex Bradbury is a compiler engineer at Igalia. He has been heavily involved in the RISC-V ecosystem since its inception, working across the hardware and software stack, having previously co-founded lowRISC. He initiated the upstream RISC-V LLVM backend implementation, authoring the...
Wednesday October 23, 2024 11:50am - 12:08pm PDT
Grand Ballroom G (Level 1)
  Software
  • Audience Experience Level Any

12:10pm PDT

Exploration of Productization of Android on RISC-V - Han Mao, Alibaba Damo Academy
Wednesday October 23, 2024 12:10pm - 12:28pm PDT
Since the Xuantie team promoted the integration of the RISC-V architecture into the AOSP mainline in 2022, support for RISC-V in the Android system has become increasingly mature. This includes JIT/AOT mode support in the Android Runtime, Cuttlefish emulator support, and optimization of numerous third-party libraries. The productization of RISC-V Android, however, is still in its early stages, with many upper-layer software stacks yet to achieve full compatibility with RISC-V. To further improve these software stacks, the Xuantie team, along with its partners, has explored productization in various customized scenarios such as payments, cloud desktops, and server clusters. This talk will share typical issues encountered during productization related to performance, stability, power consumption, and application compatibility, as well as how we addressed them.
Speakers
Mao Han

Senior Engineer, Alibaba DAMO Academy
Mao Han is a Senior Engineer at Alibaba T-Head, covering RISC-V support for the Android system. He has many years of experience in Android, Linux, C libraries, and profiling tools. Since 2020, he has led a project to port the RISC-V architecture to the Android system, and started to serve...
Wednesday October 23, 2024 12:10pm - 12:28pm PDT
Grand Ballroom G (Level 1)
  Software

1:55pm PDT

GPU Program Support on RISC-V GPU - Hyesoon Kim, Georgia Tech
Wednesday October 23, 2024 1:55pm - 2:13pm PDT
This talk describes the software system that supports running CUDA programs on a RISC-V GPU.
Speakers
Hyesoon Kim

Professor, Georgia Tech
Hyesoon Kim is a professor in the School of Computer Science at the Georgia Institute of Technology and a co-director of the Center for Novel Computing Hierarchy. Her research areas include the intersection of computer architectures and compilers, with an emphasis on heterogeneous...
Wednesday October 23, 2024 1:55pm - 2:13pm PDT
Grand Ballroom G (Level 1)
  Software

2:15pm PDT

Software Simulation Is the Key to Success for Customized CPUs and Complex SoCs - Jon Taylor, Synopsys
Wednesday October 23, 2024 2:15pm - 2:33pm PDT
RISC-V allows the freedom to innovate with custom instructions, but working out which custom instructions add the most value is key to success, and this is more easily done with simulation and models than with RTL. At the same time, new applications such as AI/ML are creating ever more complex SoCs with very high core counts. Using models in a digital twin of the design allows fast architectural exploration, accelerates software development, and, post-silicon, can help with DevOps flows and diagnosing in-field failures. This talk discusses two custom SoC projects where virtual platforms have been used to successfully develop software for many-core systems in advance of silicon being available. This requires fast, accurate golden models of the CPUs in a simulation environment that can scale to hundreds or more cores.
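A minimal sketch of the kind of exploration a model-first flow enables is shown below: a behavioral C model of a hypothetical custom multiply-accumulate instruction, instrumented so that running a workload through it reveals how often the candidate instruction would fire. The instruction, names, and workload are hypothetical illustrations, not Synopsys tooling.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical candidate instruction: rd = rd + rs1 * rs2. In a virtual
 * platform, a behavioral model like this stands in for RTL, and the counter
 * gives an early estimate of how much value the instruction would add. */
static uint64_t mac_uses = 0;

static int32_t custom_mac(int32_t rd, int32_t rs1, int32_t rs2)
{
    mac_uses++;                       /* instrumentation a simulator would keep */
    return rd + rs1 * rs2;
}

int main(void)
{
    /* Stand-in workload: a small dot product routed through the model. */
    int32_t a[64], b[64], acc = 0;
    for (int i = 0; i < 64; i++) { a[i] = i; b[i] = 64 - i; }
    for (int i = 0; i < 64; i++)
        acc = custom_mac(acc, a[i], b[i]);

    printf("result=%d, candidate instruction used %llu times\n",
           acc, (unsigned long long)mac_uses);
    return 0;
}
```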
Speakers
Jon Taylor

Senior Director of Product Management, Synopsys
Jon has over 20 years of experience in the semiconductor industry, working in technical areas from CPU verification to embedded software, and commercial areas including field applications and technology strategy. He has worked on multiple architectures including Arm, RISC-V and proprietary...
Wednesday October 23, 2024 2:15pm - 2:33pm PDT
Grand Ballroom G (Level 1)
  Software

2:35pm PDT

Porting SLEEF to RISC-V - Ludovic Henry, Rivos & Eric Love, SiFive
Wednesday October 23, 2024 2:35pm - 2:53pm PDT
Join us as we explore the journey of porting the SLEEF vectorized math library to the RISC-V architecture, focusing on ensuring complete support for single-, double-, and quad-precision math operations and Discrete Fourier Transforms (DFTs), and on testing it all under QEMU in GitHub Actions.
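To illustrate what testing means for a math library port, the sketch below shows a simple ULP-style accuracy check of single-precision sin against a double-precision reference. It is a generic example exercising libm rather than SLEEF's own entry points, but it is the kind of check that can run under QEMU in CI.

```c
#include <math.h>
#include <stdio.h>

/* Error of 'approx' relative to 'ref', measured in units in the last place
 * (ULPs) of the single-precision result. */
static double ulp_error(float approx, double ref)
{
    float ref_f = (float)ref;
    double one_ulp = (double)nextafterf(ref_f, INFINITY) - (double)ref_f;
    return one_ulp > 0.0 ? fabs((double)approx - ref) / one_ulp : 0.0;
}

int main(void)
{
    double max_err = 0.0;
    for (int i = 0; i <= 1000000; i++) {
        float x = -10.0f + 20.0f * (float)i / 1000000.0f;
        double err = ulp_error(sinf(x), sin((double)x));
        if (err > max_err) max_err = err;
    }
    printf("max observed error: %.3f ulp\n", max_err);
    return 0;
}
```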
Speakers
Eric Love

Algorithms & Libraries Team, SiFive
Ludovic Henry

Software Engineer & Lead, Rivos
I am the lead for the Managed Runtimes and System Libraries team at Rivos, a RISC-V hardware focused company. I contribute to many projects, making sure they are well supported on RISC-V. I’m also the lead for the Language Runtimes working group at RISE.
Wednesday October 23, 2024 2:35pm - 2:53pm PDT
Grand Ballroom G (Level 1)
  Software

2:55pm PDT

Aggregation Optimization for SIMD Everywhere from ARM Neon to RISC-V Vector and Crypto Extensions - Jenq-Kuen Lee & Hung-Ming Lai, National Tsing-Hua University, Taiwan
Wednesday October 23, 2024 2:55pm - 3:13pm PDT
Many libraries, such as OpenCV, FFmpeg, XNNPACK, and Eigen, utilize Arm or x86 SIMD intrinsics to optimize programs for performance. With the emergence of the RISC-V Vector extension (RVV), there is a need to migrate this performance-critical legacy code to RVV. Our prior work at RISC-V Summit 2023, USA, successfully enhanced the open-source library SIMD Everywhere (SIMDe) to support migration from Arm NEON to RVV. In this talk, we will update the status of our open-source upstreaming work in SIMDe. In addition, we further explore the migration of quantum-secure encryption algorithms with the RISC-V Cryptography Extension to meet the needs of post-quantum cryptography. Through these efforts, we identified a critical issue: the translation of SIMD intrinsics often fails to utilize the wider vectors available on the target platform. To address this, we propose an aggregation optimization, implemented as an LLVM pass, that collects short-vector intrinsics to fully leverage the wider vectors provided by RVV. Our vector aggregation optimization further boosts the performance of RVV-enhanced SIMDe from 4.350× to 11.020×.
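A minimal sketch of the migration path described above: with SIMDe's native-alias mode, existing Arm NEON intrinsic code compiles unchanged on a RISC-V target, and the RVV-enhanced backend lowers each 128-bit operation to vector instructions. The aggregation pass proposed in this talk would then collect runs of such fixed-width operations so they can use the full RVV vector length. The function below is our own illustration, not code from the SIMDe library.

```c
/* Legacy NEON intrinsic code, built on RISC-V via SIMDe's native aliases. */
#define SIMDE_ENABLE_NATIVE_ALIASES
#include <simde/arm/neon.h>

void add4(const float *a, const float *b, float *out)
{
    float32x4_t va = vld1q_f32(a);      /* 128-bit loads, emulated via RVV  */
    float32x4_t vb = vld1q_f32(b);
    vst1q_f32(out, vaddq_f32(va, vb));  /* 4-lane add; several such calls
                                           could be aggregated into one
                                           wider RVV operation */
}
```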
Speakers
Jenq-Kuen Lee

Professor, National Tsing Hua University, Taiwan
Jenq-Kuen Lee received the B.S. degree in computer science from National Taiwan University in 1984. He received the M.S. and Ph.D. degrees in 1991 and 1992, respectively, in computer science from Indiana University. He is now a professor at National Tsing-Hua University, Taiwan, where...
Hung-Ming Lai

PhD Student, National Tsing-Hua University, Taiwan
Hung-Ming is a PhD student in the Department of Computer Science, National Tsing-Hua University, Taiwan. His thesis advisor is Prof. Jenq-Kuen Lee. His research interests are in compiler optimizations on RISC-V with SIMD computations, AI compiler optimizations, and compiler analysis...
Wednesday October 23, 2024 2:55pm - 3:13pm PDT
Grand Ballroom G (Level 1)
  Software
 