Sizing AI Workloads

The Cloudcast

33:34
 
Content provided by Massive Studios. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Massive Studios or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://pt.player.fm/legal.

John Yue (CEO & Co-Founder @ inference.ai) discusses AI workload sizing, matching GPUs to workloads, GPU availability vs. cost, and more.
SHOW: 815
CLOUD NEWS OF THE WEEK -
http://bit.ly/cloudcast-cnotw
NEW TO CLOUD? CHECK OUT OUR OTHER PODCAST -
"CLOUDCAST BASICS"
SHOW NOTES:

Topic 1 - Our topic for today is sizing and IaaS hosting for AI/ML. We’ve covered a lot of the basics lately; today we’re going to dig deeper. There is a surprising amount of depth to AI sizing, and it isn’t just the speeds and feeds of GPUs. We’d like to welcome John Yue (CEO & Co-Founder @ inference.ai) for this discussion. John, welcome to the show!
Topic 2 - Let’s start with sizing. I’ve talked to a lot of customers recently in my day job, and it is amazing how deep AI/ML sizing can go. First, you have to size for training/fine-tuning differently than you would for the inference stage. Second, some just think you should pick the biggest GPUs you can afford and go. How should your customers approach this? (GPUs, software dependencies, etc.) See the back-of-envelope sizing sketch after the topic list.
Topic 2a - Follow-up question: on the business side, what parameters need to be considered? (budget, cost efficiency, latency/response time, timeline, etc.)
Topic 3 - The whole process can be overwhelming, and as we mentioned, some organizations may not think of everything. You recently announced a chatbot to help with this exact process, ChatGPU. Tell everyone a bit about that and how it came to be.
Topic 4 - This is almost like a matchmaking service, correct? Everyone wants an H100, but not everyone needs or can afford an H100.
Topic 5 - How does GPU availability play into all of this? NVIDIA is sold out for something like 2 years at this point; how is that sustainable? Does everything need to run on a “Ferrari class” NVIDIA GPU?
Topic 6 - What’s next in the IaaS for AI/ML space? What does a next-generation data center for AI/ML look like? Will the industry move away from GPUs to reduce dependence on NVIDIA?
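
To make the sizing discussion in Topic 2 concrete, here is a rough back-of-envelope memory estimate showing why training and inference size so differently. The 7B-parameter model and bytes-per-parameter figures below are illustrative rules of thumb, not numbers from the episode:

# Back-of-envelope GPU memory sizing for a hypothetical 7B-parameter
# transformer (Python). Rules of thumb only; real footprints vary with
# framework, parallelism strategy, batch size, and sequence length.

PARAMS = 7e9       # parameter count (hypothetical model)
BYTES_FP16 = 2     # bytes per weight in fp16/bf16

# Inference: roughly just the weights, plus a KV cache that grows
# with batch size and context length.
inference_gb = PARAMS * BYTES_FP16 / 1e9
print(f"inference weights: ~{inference_gb:.0f} GB")  # ~14 GB

# Training with mixed-precision Adam: a common rule of thumb is
# ~16 bytes/param (fp16 weights and gradients, plus fp32 master
# weights and two optimizer moments), before counting activations.
training_gb = PARAMS * 16 / 1e9
print(f"training state: ~{training_gb:.0f} GB (excluding activations)")  # ~112 GB

# Takeaway: the same model that serves on a single 24 GB card can
# need multiple 80 GB-class GPUs (or sharding/offload) to train.

Even this rough arithmetic suggests why "pick the biggest GPU you can afford" is the wrong starting point: the right card depends on which stage of the model lifecycle you are sizing for.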
FEEDBACK?
