Read On
 
Keeping you up to date with the latest trends and best-performing architectures in this fast-evolving field of computer science. Selecting papers by comparative results, citations and influence, we educate you on the latest research. Consider supporting us on Patreon.com/PapersRead for feedback and ideas.
 
A weekly show all about audiobooks, recorded at the RNIB Talking Book studios. We talk to your favourite authors and narrators, along with reviews and news about new audiobooks. Presented and produced by Robert Kirkwood, you'll find a new episode here every Friday at 1pm, plus bonus content such as longer uncut interviews and episodes of our occasional extra show, The Book Group. Talking Books is a free service from RNIB giving access to over 40,000 fiction and non-fiction books for adults and ...
 
Left On Read is a podcast where I'll be talking through all my bookish thoughts. Let's hang 🖤 • (Yes, I did rebrand the cover art. I wanted something simpler and less busy.) • https://instagram.com/ashleyyyreads
 
Here we will be talking about love (in its simplicity and its complications), sex of course, consent, and sexuality, as we find them written in books. I am excited about this, and I hope that you are a bit curious to see where these conversations take us.
 
Welcome back to “I read it online somewhere”. We are here every week to discuss what we have read online and answer the strange and wonderful questions you probably should have asked your science teachers at school. I am Amie and I didn’t ask these questions either, but luckily for me I’m joined by two science teachers, Andrew and Ross, who can help answer them now! Each week we look at a science story we read online and try to answer your questions. If you want to get in touch with y ...
 
 
Welcome to Episode 207. We are thrilled to have the opportunity to talk with author and rare book expert Rebecca Romney. Her first book was PRINTER’S ERROR: Irreverent Stories from Book History, and she made an important contribution to the world of romance fiction with her collection, THE ROMANCE NOVEL IN ENGLISH: A Rare Book Survey, 1769-1999. Y…
 
Inspired by the Kolmogorov-Arnold representation theorem, we propose Kolmogorov-Arnold Networks (KANs) as promising alternatives to Multi-Layer Perceptrons (MLPs). While MLPs have fixed activation functions on nodes ("neurons"), KANs have learnable activation functions on edges ("weights"). KANs have no linear weights at all -- every weight paramet…
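To make the edge-activation idea concrete, here is a minimal sketch in which every edge carries its own learnable 1-D function; Gaussian radial basis functions stand in for the paper's B-spline parameterization, and all names and shapes are illustrative rather than the authors' code.

```python
import numpy as np

class KANLayer:
    """One KAN-style layer: every edge (i, j) carries its own learnable
    1-D function, here a small sum of Gaussian radial basis functions
    (the paper uses B-splines; RBFs keep the sketch short)."""

    def __init__(self, in_dim, out_dim, n_basis=8, rng=None):
        rng = rng or np.random.default_rng(0)
        # Fixed RBF centers on [-1, 1]; only the coefficients are learned.
        self.centers = np.linspace(-1.0, 1.0, n_basis)
        self.width = 2.0 / n_basis
        # coef[i, j, k]: k-th basis coefficient of the edge from input i to output j.
        self.coef = rng.normal(0.0, 0.1, size=(in_dim, out_dim, n_basis))

    def forward(self, x):
        # x: (batch, in_dim). Evaluate each edge function phi_ij(x_i),
        # then sum over incoming edges -- no separate linear weights.
        basis = np.exp(-((x[:, :, None] - self.centers) / self.width) ** 2)
        edge_out = np.einsum("bik,ijk->bij", basis, self.coef)
        return edge_out.sum(axis=1)  # (batch, out_dim)

layer = KANLayer(in_dim=3, out_dim=2)
y = layer.forward(np.random.default_rng(1).uniform(-1, 1, size=(4, 3)))
print(y.shape)  # (4, 2)
```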
 
Scorpus! Scorpus! Scorpus! Hannah and Laura are wrapping up the plot of Storm Front and find it hard to follow Harry Dresden's motivations... In other news, Hannah and Laura share some recent graphic novel reads and a memoir that reflects on society's treatment of fat people, and they have also both started, but not finished, a really popular tv…
 
While many contemporary large language models (LLMs) can process lengthy input, they still struggle to fully utilize information within the long context, known as the lost-in-the-middle challenge. We hypothesize that it stems from insufficient explicit supervision during the long-context training, which fails to emphasize that any position in a lon…
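The effect is easy to probe in miniature: bury one fact at varying depths of a long distractor context and check whether the model still retrieves it. A hedged sketch; llm() is a hypothetical stand-in for any chat-completion client, not an API from the paper.

```python
def llm(prompt: str) -> str:
    # Hypothetical stand-in: plug in your model client here.
    raise NotImplementedError

FACT = "The vault code is 4712."
FILLER = "The sky was a pale grey that morning. " * 200

def probe(depth: float) -> bool:
    """depth in [0, 1]: 0 places the fact at the start of the context,
    1 at the end; mid-range depths are where retrieval tends to degrade."""
    cut = int(len(FILLER) * depth)
    context = FILLER[:cut] + FACT + " " + FILLER[cut:]
    answer = llm(context + "\n\nQuestion: What is the vault code?")
    return "4712" in answer

# for d in (0.0, 0.25, 0.5, 0.75, 1.0):
#     print(d, probe(d))
```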
 
In this report, we introduce InternVL 1.5, an open-source multimodal large language model (MLLM) to bridge the capability gap between open-source and proprietary commercial models in multimodal understanding. We introduce three simple improvements: (1) Strong Vision Encoder: we explored a continuous learning strategy for the large-scale vision foun…
 
In the realm of mimicking human deliberation, large language models (LLMs) show promising performance, thereby amplifying the importance of this research area. Deliberation is influenced by both logic and personality. However, previous studies predominantly focused on the logic of LLMs, neglecting the exploration of personality aspects. In this wor…
 
The quadratic complexity and weak length extrapolation of Transformers limit their ability to scale to long sequences, and while sub-quadratic solutions like linear attention and state space models exist, they empirically underperform Transformers in pretraining efficiency and downstream task accuracy. We introduce Megalodon, a neural architecture…
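To ground the complexity claim, a small sketch contrasting standard softmax attention (an explicit n-by-n score matrix, hence O(n^2) in sequence length) with a kernelized linear-attention variant that exploits associativity to stay linear in n. This illustrates the sub-quadratic family the abstract mentions, not Megalodon's own mechanism.

```python
import numpy as np

def softmax_attention(q, k, v):
    """Standard attention: materializes an (n, n) score matrix, O(n^2)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def linear_attention(q, k, v, feature=lambda x: np.maximum(x, 0.0) + 1e-6):
    """Kernelized variant: phi(Q) (phi(K)^T V) is computed right-to-left,
    so the cost is O(n * d^2) -- linear in n, with no (n, n) matrix."""
    q, k = feature(q), feature(k)
    kv = k.T @ v                     # (d, d_v)
    z = q @ k.sum(axis=0)            # per-query normalizer, shape (n,)
    return (q @ kv) / z[:, None]

rng = np.random.default_rng(0)
n, d = 16, 8
q, k, v = (rng.normal(size=(n, d)) for _ in range(3))
print(softmax_attention(q, k, v).shape, linear_attention(q, k, v).shape)
```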
 
We introduce phi-3-mini, a 3.8 billion parameter language model trained on 3.3 trillion tokens, whose overall performance, as measured by both academic benchmarks and internal testing, rivals that of models such as Mixtral 8x7B and GPT-3.5 (e.g., phi-3-mini achieves 69% on MMLU and 8.38 on MT-bench), despite being small enough to be deployed on a p…
 
Laura finally finished a book series and also *shockingly* adores another TV show based on a video game. Hannah has been listening to lots of great, queer YA books and, while getting into her thoughts on a movie she watched recently, changed her opinion of it completely. Then Hannah and Laura immerse themselves in the middle of Harry Dresden's …
 
Large-scale recommendation systems are characterized by their reliance on high-cardinality, heterogeneous features and the need to handle tens of billions of user actions on a daily basis. Despite being trained on huge volumes of data with thousands of features, most Deep Learning Recommendation Models (DLRMs) in industry fail to scale with compute.…
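To illustrate what high-cardinality features mean in practice, a hedged sketch of the hashing trick often used in recommendation models: raw IDs are hashed into a fixed-size embedding table, so memory does not grow with the true cardinality. The class and sizes are illustrative, not from the paper.

```python
import zlib
import numpy as np

class HashedEmbedding:
    """Map raw categorical IDs into a fixed-size embedding table via
    hashing, decoupling table size from the feature's true cardinality."""

    def __init__(self, n_buckets=1_000_000, dim=16, seed=0):
        rng = np.random.default_rng(seed)
        self.n_buckets = n_buckets
        self.table = rng.normal(0.0, 0.01, size=(n_buckets, dim))

    def lookup(self, raw_ids):
        # crc32 is stable across runs, unlike Python's salted str hash.
        idx = [zlib.crc32(str(i).encode()) % self.n_buckets for i in raw_ids]
        return self.table[idx]       # (len(raw_ids), dim)

emb = HashedEmbedding()
print(emb.lookup(["user_42", "item_abc", 7_000_000_001]).shape)  # (3, 16)
```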
 
Welcome to Episode 206, where we have a fantastic conversation with Rebecca Rego Barry, author of THE VANISHING OF CAROLYN WELLS: Investigations into a Forgotten Mystery Author. One reviewer referred to Barry’s book as a “process biography.” It is true: Barry takes you along on her investigation into the life of Carolyn Wells who, it turns out, wrot…
 
It's TV Tuesday, so Hannah and Laura decided to hop in the TARDIS and cover the first series of Doctor Who! They chat about the actors, favorite episodes, themes, and of course, the Daleks. ***This podcast episode contains SPOILERS for Doctor Who series 1.*** Media Mentions: Doctor Who series 1 - Max; The Lord of the Rings movies - Max; Community - N…
 
We study how to apply large language models to write grounded and organized long-form articles from scratch, with comparable breadth and depth to Wikipedia pages. This underexplored problem poses new challenges at the pre-writing stage, including how to research the topic and prepare an outline prior to writing. We propose STORM, a writing system f…
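A rough sketch of the research-then-outline workflow the abstract describes, not STORM's actual prompts or code; llm() is a hypothetical stand-in for any chat-completion client.

```python
def llm(prompt: str) -> str:
    # Hypothetical stand-in: plug in your model client here.
    raise NotImplementedError

def write_article(topic: str, n_perspectives: int = 3) -> str:
    """Pre-writing first: research the topic from several perspectives,
    consolidate an outline, then draft section by section."""
    # 1. Research: perspective-specific questions, then grounded answers.
    notes = []
    for i in range(n_perspectives):
        qs = llm(f"As expert #{i + 1}, list key questions about: {topic}")
        notes.append(llm(f"Answer concisely, citing sources:\n{qs}"))
    research = "\n\n".join(notes)
    # 2. Outline: distill the research notes into a hierarchical outline.
    outline = llm(f"From these notes, draft an outline for '{topic}':\n{research}")
    # 3. Write: expand each outline heading into a grounded section.
    sections = [
        llm(f"Write the section '{h}' of an article on '{topic}', "
            f"grounded only in these notes:\n{research}")
        for h in outline.splitlines() if h.strip()
    ]
    return "\n\n".join(sections)
```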
 
In this work, we introduce Mini-Gemini, a simple and effective framework enhancing multi-modality Vision Language Models (VLMs). Despite the advancements in VLMs facilitating basic visual dialog and reasoning, a performance gap persists compared to advanced models like GPT-4 and Gemini. We try to narrow the gap by mining the potential of VLMs for b…
 
We present InstantMesh, a feed-forward framework for instant 3D mesh generation from a single image, featuring state-of-the-art generation quality and significant training scalability. By synergizing the strengths of an off-the-shelf multiview diffusion model and a sparse-view reconstruction model based on the LRM architecture, InstantMesh is able …
 
The Year of the Dresden is finally here! Hannah and Laura are covering the first third of Jim Butcher's Storm Front and, friends, this book is a romp. Laura also makes a bold claim about television shows she watched growing up and gushes about a new favorite book. Hannah has delved into a lot of recommendations that were given to her by Laura a…
 
We analyze how well pre-trained large language models (e.g., Llama2, GPT-4, Claude 3, etc) can do linear and non-linear regression when given in-context examples, without any additional training or gradient updates. Our findings reveal that several large language models (e.g., GPT-4, Claude 3) are able to perform regression tasks with a performance…
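The setup is easy to reproduce in miniature: format the (x, y) pairs as text and ask the model to complete the output for a new x, with no gradient updates. A hedged sketch; llm() is again a hypothetical stand-in, and the example data is made up.

```python
def llm(prompt: str) -> str:
    # Hypothetical stand-in: plug in your model client here.
    raise NotImplementedError

def icl_regression_prompt(examples, x_new):
    """Serialize in-context (x, y) examples and leave the last output blank."""
    lines = [f"Input: {x:.3f}\nOutput: {y:.3f}" for x, y in examples]
    lines.append(f"Input: {x_new:.3f}\nOutput:")
    return "\n".join(lines)

# Noisy y = 3x + 1; the model must infer the mapping from examples alone.
examples = [(0.1, 1.31), (0.5, 2.52), (0.9, 3.68), (1.3, 4.91)]
prompt = icl_regression_prompt(examples, x_new=2.0)
# y_hat = float(llm(prompt))   # expected near 7.0
```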
 
Researchers have made significant progress in automating the software development process in the past decades. Automated techniques for issue summarization, bug reproduction, fault localization, and program repair have been built to ease the workload of developers. Recent progress in Large Language Models (LLMs) has significantly impacted the devel…
 
Large language models (LLMs), exemplified by ChatGPT, have gained considerable attention for their excellent natural language processing capabilities. Nonetheless, these LLMs present many challenges, particularly in the realm of trustworthiness. Therefore, ensuring the trustworthiness of LLMs emerges as an important topic. This paper introduces Tru…
 
In this study, we propose AniPortrait, a novel framework for generating high-quality animation driven by audio and a reference portrait image. Our methodology is divided into two stages. Initially, we extract 3D intermediate representations from audio and project them into a sequence of 2D facial landmarks. Subsequently, we employ a robust diffusio…
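A skeleton of the two-stage data flow the abstract describes, with every stage left as a hypothetical stub rather than the authors' models.

```python
import numpy as np

def audio_to_3d(audio: np.ndarray) -> np.ndarray:
    """Stage 1a: audio frames -> 3D intermediate face representations."""
    raise NotImplementedError  # stand-in for the paper's audio model

def project_to_landmarks(rep_3d: np.ndarray) -> np.ndarray:
    """Stage 1b: project the 3D representations to a 2D landmark sequence."""
    raise NotImplementedError  # stand-in for the projection step

def diffusion_render(landmarks: np.ndarray, portrait: np.ndarray) -> np.ndarray:
    """Stage 2: a diffusion model renders frames conditioned on the
    landmark sequence plus the reference portrait image."""
    raise NotImplementedError  # stand-in for the diffusion renderer

def animate(audio: np.ndarray, portrait: np.ndarray) -> np.ndarray:
    return diffusion_render(project_to_landmarks(audio_to_3d(audio)), portrait)
```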
 