Content provided by the tinyML Foundation. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by the tinyML Foundation or its podcast platform partner. If you believe someone is using your copyrighted work without permission, follow the process described at https://pt.player.fm/legal.

tinyML Talks - Cloud to the Edge: Future of LLMs w/ Mahesh Yadav of Google

59:48
 

Manage episode 441634742 series 3574631

Curious how you can run a colossal 405-billion-parameter model on a device with a footprint of a mere 2 billion parameters? Join us as Mahesh Yadav from Google shares his journey from developing small devices to working with massive language models. Mahesh reveals the groundbreaking possibilities of running large models on minimal hardware, making internet-free edge AI a reality even on devices as small as a smartwatch. This eye-opening discussion is packed with insights into the future of AI and edge computing that you don't want to miss.
Explore the strategic shifts by tech giants in the language model arena with Mahesh and our hosts. We dissect Microsoft's investment in OpenAI and its own Phi model, along with Google's development of Gemma, exploring how increasing the parameter count of large language models leads to emergent behaviors such as logical reasoning and translation. Delving into the technical and financial implications of these advancements, we also address privacy concerns and the critical need for cost-effective model optimization in enterprise environments that handle sensitive data.
Advancements in edge AI training take center stage as Mahesh unpacks the latest techniques for model size reduction. Learn about synthetic data generation and the use of quantization, pruning, and distillation to shrink models without losing accuracy. Mahesh also highlights practical applications of small language models in enterprise settings, from contract management to sentiment analysis, and discusses the challenges of deploying these models on edge devices. Tune in to discover cutting-edge strategies for model compression and adaptation, and how startups are leveraging base models with specialized adapters to revolutionize the AI landscape.
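To make the size-reduction techniques mentioned above concrete, here is a minimal sketch of symmetric int8 post-training quantization, the core idea behind one of the compression methods discussed in the episode. The function names and values are illustrative only, not taken from the talk or any particular library.

```python
# Minimal sketch of symmetric int8 post-training quantization:
# map float weights onto the integer range [-127, 127] with a
# single shared scale, then reconstruct approximate floats.

def quantize_int8(weights):
    """Quantize a list of float weights to int8 with one symmetric scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.05, 0.9, -0.33]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Rounding error is bounded by half a quantization step.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
assert max_err <= scale / 2 + 1e-9
```

Storing int8 values instead of float32 cuts weight memory roughly 4x, which is why quantization (often combined with pruning and distillation, as Mahesh describes) is a staple of fitting models onto edge devices.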

Learn more about the tinyML Foundation - tinyml.org


Chapters

1. tinyML Talks - Cloud to the Edge: Future of LLMs w/ Mahesh Yadav of Google (00:00:00)

2. Edge AI Development and Challenges (00:00:37)

3. Edge AI With Small Language Models (00:13:43)

4. Advancements in Edge AI Training (00:22:53)

5. Techniques for Model Size Reduction (00:27:15)

6. Applications of Small Language Models (00:37:40)

7. Discussion on NVIDIA, ONNX, and Acceleration (00:41:05)

8. Model Compression and Adaptation Techniques (00:53:57)

6 episodes
