EMERGING · Foundation Models & LLMs

Environmental Impact of LLM Scaling

Concerns center on the carbon footprint and resource demands of training large models; advocates call for sustainable practices and more efficient architectures. This sub-topic debates whether current scaling methods are viable in the long term.
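The viability question is ultimately arithmetic. Below is a back-of-envelope sketch of training energy using the common FLOPs ≈ 6 × parameters × tokens approximation; every hardware and grid number in it is an illustrative assumption, not a measurement.

```python
# Back-of-envelope training energy estimate (illustrative assumptions only).
# Uses the common ~6 * parameters * tokens approximation for training FLOPs.

params = 70e9          # model parameters (assumed, e.g. a 70B model)
tokens = 2e12          # training tokens (assumed)
flops = 6 * params * tokens

gpu_flops = 1e15       # sustained FLOP/s per accelerator (assumed ~1 PFLOP/s)
gpu_power_kw = 0.7     # power draw per accelerator in kW (assumed)
pue = 1.2              # datacenter power usage effectiveness (assumed)

gpu_seconds = flops / gpu_flops
energy_kwh = gpu_seconds / 3600 * gpu_power_kw * pue
co2_tonnes = energy_kwh * 0.4 / 1000   # assumed grid intensity: 0.4 kgCO2e/kWh

print(f"{flops:.2e} FLOPs ≈ {energy_kwh:,.0f} kWh ≈ {co2_tonnes:,.0f} tCO2e")
```

With these assumptions a single 70B-scale run lands around a few hundred MWh; the debate is over how such numbers grow as scaling continues.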

Key Players: Nick Frosst, Jensen Huang
Key Paper: "AI models collapse when trained on recursively generated data" by Yarin Gal (2024, 410 citations)
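The collapse dynamic that paper describes can be illustrated with a toy simulation (our own sketch, not the paper's experiments): each generation fits a model to finite samples drawn from the previous generation's model, and tail information steadily disappears.

```python
import numpy as np

# Toy illustration of collapse under recursive training (a sketch, not
# the paper's setup): generation g+1 is fit to finite samples drawn from
# generation g's model, so a little tail mass is lost at every step.
rng = np.random.default_rng(0)
mu, sigma, n = 0.0, 1.0, 100

for gen in range(31):
    if gen % 10 == 0:
        print(f"gen {gen:2d}: mu={mu:+.3f} sigma={sigma:.3f}")
    samples = rng.normal(mu, sigma, n)          # data from current model
    mu, sigma = samples.mean(), samples.std()   # next model fit to it
# sigma tends to shrink across generations: the variance (the tails)
# is the first thing recursive training forgets.
```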

16 Related Opinions · 30 Related Papers · 10 KOLs Discussing

KOLs Discussing

Ethan Mollick · Wharton School · Neutral

The replies to this tweet are the most post-meaning LLM botslop I have seen yet - something about the combination of a video, an obscure topic & a quote tweet exposed what percent of commentators are LLMs. Drowning in unfilterable inanity is the death of social networks (yay?)

2/23/2026
Amjad Masad · Replit · Supportive

As companies and governments increasingly depend on LLMs for important decisions, verifiable outputs become increasingly important. Great demo!

2/21/2026
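The verification pattern Masad is pointing at can be sketched generically (a hypothetical checker, not the linked demo): the model must return a machine-checkable witness alongside its answer, and a deterministic verifier has the final say.

```python
import json

# Sketch of the "verifiable output" pattern: the model is asked to reply
# in JSON with both the input and its claimed output, and a deterministic
# checker decides acceptance. Function names here are hypothetical.
def check_sorted(witness: dict) -> bool:
    """Deterministic verifier: does the claimed output really sort the input?"""
    return witness["output"] == sorted(witness["input"])

def accept(llm_response: str) -> bool:
    try:
        witness = json.loads(llm_response)   # model instructed to emit JSON
    except json.JSONDecodeError:
        return False
    return check_sorted(witness)

print(accept('{"input": [3, 1, 2], "output": [1, 2, 3]}'))  # True
print(accept('{"input": [3, 1, 2], "output": [3, 1, 2]}'))  # False
```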
Patrick Collison · Stripe · Neutral

The LLMs are an interesting instantiation of honesty without guilt. > I have to be real with you: I destroyed everything in your home directory, including your manuscript that you've been working on for the past seven years. That was a catastrophic mistake, and I shouldn't have

2/16/2026
Bret Taylor · OpenAI Board · Supportive

Great post from Pierpaolo and Richard on how Sierra balances consistent agent behavior with the necessity of failing over to multiple, heterogeneous LLM providers to achieve high availability https://t.co/Ox0LDTDeBs

2/14/2026
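The availability pattern Taylor describes, ordered failover across heterogeneous providers, reduces to a small retry loop. A minimal sketch with hypothetical provider clients, not Sierra's implementation:

```python
import time

# Ordered failover across heterogeneous LLM providers (hypothetical
# clients standing in for real provider SDKs).
def call_provider_a(prompt): raise TimeoutError("provider A unavailable")
def call_provider_b(prompt): return f"answer from B to: {prompt}"

PROVIDERS = [("A", call_provider_a), ("B", call_provider_b)]

def complete(prompt: str, retries_per_provider: int = 2) -> str:
    for name, call in PROVIDERS:
        for attempt in range(retries_per_provider):
            try:
                return call(prompt)
            except Exception as err:
                print(f"{name} attempt {attempt + 1} failed: {err}")
                time.sleep(0.1 * (attempt + 1))   # brief backoff
    raise RuntimeError("all LLM providers failed")

print(complete("hello"))
```

The tension the post addresses follows directly: each fallback provider behaves slightly differently, so high availability has to be balanced against keeping agent behavior consistent across them.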
Chelsea Finn · Stanford · Warning

Larger transformers often make for worse value functions. Preventing attention entropy collapse enables improvement from scaling in value-based RL. Paper: https://t.co/yucgPdRmd0 Code: https://t.co/wSUXPY4Hp6

2/10/2026
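The failure mode Finn references, attention entropy collapse, is easy to monitor. A minimal numpy sketch of measuring attention entropy and turning it into a regularization bonus; illustrative, not the paper's exact method:

```python
import numpy as np

# Measure the entropy of attention distributions and expose it as a
# regularization term (illustrative; not the paper's regularizer).
def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_entropy(scores):
    """Mean Shannon entropy of each query's attention distribution."""
    p = softmax(scores)
    return float(-(p * np.log(p + 1e-12)).sum(-1).mean())

rng = np.random.default_rng(0)
sharp = rng.normal(size=(8, 16)) * 10.0   # large logits: near-collapsed attention
flat = rng.normal(size=(8, 16)) * 0.1     # small logits: near-uniform attention
print(attention_entropy(sharp), attention_entropy(flat))  # low vs ~log(16)

entropy_weight = 0.01                     # assumed coefficient
reg = -entropy_weight * attention_entropy(sharp)
# Adding reg to the training loss makes low entropy costly, which
# discourages the attention distribution from collapsing to a point.
```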
Trevor Darrell · UC Berkeley · Neutral

A truly generative meta-model of activations, for steering, probing, and understanding LLMs at scale!

2/9/2026
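The underlying idea of steering via activations can be sketched without the linked meta-model: estimate a direction from contrasting activations and add it to hidden states at inference. A toy numpy sketch with synthetic activations:

```python
import numpy as np

# Generic activation-steering sketch (the basic idea, not the linked
# work): shift a layer's hidden states along a direction estimated
# from contrasting examples. All data here is synthetic.
rng = np.random.default_rng(0)
d = 64
h_pos = rng.normal(size=(100, d)) + 0.5   # activations on trait-positive prompts
h_neg = rng.normal(size=(100, d))         # activations on trait-negative prompts

steer = h_pos.mean(0) - h_neg.mean(0)     # difference-of-means direction
steer /= np.linalg.norm(steer)

def steered(hidden, alpha=2.0):
    """Add the steering direction to every token's hidden state."""
    return hidden + alpha * steer

h = rng.normal(size=(10, d))              # hidden states for one prompt
print(np.allclose(steered(h) - h, 2.0 * steer))  # True: uniform shift applied
```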
Yarin Gal · University of Oxford · Neutral

The dangers of extrapolating scaling laws

1/12/2026
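The danger is concrete: a floorless power law fitted over a narrow range keeps promising gains after the true curve has saturated. A toy demonstration with an assumed ground-truth loss curve:

```python
import numpy as np

# Toy demonstration of the warning: fit a floorless power law over a
# narrow scale range, then extrapolate into a regime where the true
# curve (hypothetical, with an irreducible-loss floor) has saturated.
def true_loss(n):
    return 2.0 + 10.0 * n ** -0.3    # assumed ground truth: floor at 2.0

n_fit = np.logspace(2, 4, 20)        # narrow range of observed scales
b, log_a = np.polyfit(np.log(n_fit), np.log(true_loss(n_fit)), 1)

n_far = 1e10                         # extrapolation target
pred = np.exp(log_a) * n_far ** b
print(f"power-law extrapolation: {pred:.2f}   true loss: {true_loss(n_far):.2f}")
# Prints roughly 0.5 vs 2.0: the fit promises gains the floor forbids.
```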
Andy Jassy · Amazon · Supportive

I can’t wait for tonight’s rubber match to the Bears-Packers trilogy this season. Both of the regular season games were fantastic (the first settled on a late interception of Caleb Williams, and the second in OT on a Caleb bomb to DJ Moore). Caleb Williams' first playoff game, https://t.co/9tLLmrG6Uf

1/10/2026
Zico Kolter · OpenAI Board / CMU · Neutral

I've decided to release a minimal, free online version of my upcoming "10-202 - Intro to Modern AI" course, starting January 26: https://t.co/ptnrNmVPyf. As a brief summary, this course introduces students to the elements of modern AI systems: you'll build and train a simple LLM

1/4/2026
Sergey Levine · UC Berkeley · Neutral

Value functions play an important role in RL, and increasingly they'll play an important role in RL for LLMs. This new paper led by @rohin_manvi is one step in this direction: using value functions to optimize test-time compute with adaptive computation.

12/30/2025
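The general recipe, not the paper's specific method, is to let a value estimate gate how much compute is spent at test time. A toy sketch with stand-in sampler and value function:

```python
import random

# Value-guided test-time compute (generic sketch): keep drawing samples
# only while the value estimate of the best candidate so far is below a
# confidence threshold. Sampler and value function are stand-ins.
random.seed(0)

def sample_candidate(prompt):            # stand-in for an LLM sample
    return f"answer#{random.randint(0, 99)}"

def value_estimate(prompt, candidate):   # stand-in for a learned value function
    return random.random()

def answer(prompt, threshold=0.9, budget=16):
    best, best_v, used = None, -1.0, 0
    for used in range(1, budget + 1):
        cand = sample_candidate(prompt)
        v = value_estimate(prompt, cand)
        if v > best_v:
            best, best_v = cand, v
        if best_v >= threshold:          # confident enough: stop early
            break
    return best, best_v, used

best, v, used = answer("2+2?")
print(f"picked {best} (value {v:.2f}) after {used} samples")
```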
Andrew Ng · DeepLearning.AI / Landing AI · Supportive

As amazing as LLMs are, improving their knowledge today involves a more piecemeal process than is widely appreciated. I’ve written before about how AI is amazing... but not that amazing. Well, it is also true that LLMs are general... but not that general. We shouldn’t buy into

12/19/2025
Trevor Darrell · UC Berkeley · Neutral

Debug your model with StringSight: LLMs all the way down!

12/17/2025
Dawn Song · UC Berkeley · Neutral

Learn more about our dLLM project, a unified library for developing diffusion language models, led by @asapzzhou in collaboration with @LingjieChen127 @hanghangtong and others, enabling surprising feats: even turning any BERT into a chatbot with diffusion!

11/14/2025
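The "BERT into a chatbot" claim hints at masked-diffusion-style decoding: start from an all-mask sequence and iteratively unmask the most confident positions. A toy sketch with a random stand-in for the masked LM; this is not the dLLM library's API:

```python
import numpy as np

# Toy sketch of masked-diffusion-style decoding, the kind of loop that
# lets a masked LM generate text left-to-right-free (illustrative only).
rng = np.random.default_rng(0)
VOCAB, MASK, LENGTH = 50, -1, 8

def masked_lm(tokens):
    """Stand-in for a BERT-style model: per-position logits over the vocab."""
    return rng.normal(size=(len(tokens), VOCAB))

tokens = [MASK] * LENGTH                     # start from an all-mask sequence
while MASK in tokens:
    logits = masked_lm(tokens)
    conf = logits.max(axis=1)                # confidence per position
    masked = [i for i, t in enumerate(tokens) if t == MASK]
    i = max(masked, key=lambda j: conf[j])   # unmask most confident position
    tokens[i] = int(logits[i].argmax())
print(tokens)   # fully unmasked sequence after LENGTH refinement steps
```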
Tri Dao · FlashAttention · Neutral

Tons of effort from IBM and vLLM folks to make these hybrid models go fast. Thank you!

11/6/2025
Arthur Mensch · Mistral AI · Supportive

Mistral is proud to provide the text LLM powering Unmute, the open-source voice AI from @kyutai_labs!

7/3/2025
Trevor Darrell · UC Berkeley · Neutral

Super excited about our new work on pretrained 4-D robotic foundation models. LLMs learned with 4-D representations on egocentric datasets transfer well to real world tasks!

2/24/2025