Probably actually reading/writing
March 5, 2020 — May 30, 2024
Stuff that I am currently actively reading or otherwise working on. If you are looking at this, and you aren’t me, you may need to consider re-evaluating your hobbies.
1 Triage
2 Notes
I need to reclassify the bio computing links; that section has become confusing, and there are too many nice ideas in there that are not clearly distinguished.
3 Currently writing
Not all of these are published yet; expect broken links.
community building
Is academic literary studies actually distinct from the security discipline of studying side-channel attacks?
Is residual prediction different from adversarial prediction?
- Movement design
- Returns on hierarchy
- Effective collectivism
- Alignment
- Emancipating my tribe, the cruelty of collectivism (and why I love it anyway)
- Institutions for angels
- Institutional alignment
- Beliefs and rituals of tribes
- Where to deploy taboo
- The Great Society will never feel great, merely be better than the alternatives
- Egregores etc
- Player versus game
Something about the fungibility of hipness and cash
Monastic traditions
What even are GFlownets?
how to do house stuff (renovation etc)
Power and inscrutability
strategic ignorance
What is an energy-based model? tl;dr: branding for models that handle likelihoods through a potential function which is not normalised to be a density. I do not think there is anything new about that per se.
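In symbols (this is just the standard unnormalised-density formulation, not anything from a particular paper): an energy function $E_\theta$ defines

$$
p_\theta(x) = \frac{\exp\{-E_\theta(x)\}}{Z(\theta)},
\qquad
Z(\theta) = \int \exp\{-E_\theta(x)\}\,\mathrm{d}x,
$$

and the selling point/problem is that the normalising constant $Z(\theta)$ is generally intractable, so training and sampling have to route around computing it.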
Funny-shaped learning
- Causal attention
- graphical ML
- gradient message passing
- All inference is already variational inference
Human learner series
- Our moral wetware
- Something about universal grammar and its learnable local approximations, versus universal ethics and its learnable local approximations. Morality by template, computational difficulty of moral identification. Leading by example of necessity.
- Burkean conservatism is about unpacking when moral training data is out-of-distribution
- Morality under uncertainty and computational constraint
- Superstimuli
- Clickbait bandits
- correlation construction
- Moral explainability
- righting and wronging
- Akrasia in stochastic processes: What time-integrated happiness should we optimise?
Comfort traps ✅ Good enough for now
Myths ✅ a few notes is enough
Classification and society series
- Affirming the consequent and evaporative tribalism.
- Classifications are not very informative
- Adversarial categorization
- AUC and collateral damage
- bias and base rates
- Decision theory
- decision theory and prejudice
Shouting at each other on the internet series (Teleological liberalism)
- Modern politics seems to be excellent at reducing the vast spectrum of policy space to two mediocre choices, then arguing about which one is worse. What is this tendency called?
- The Activist and decoupling games, and game-changing
- on being a good weak learner
- lived evidence deductions and/or ad hominem for discussing genetic arguments.
- diffusion of responsibility — is this distinct from messenger shooting?
- Iterative game theory of communication styles
- Invasive arguments
- Coalition games
- All We Need Is Hate
- Speech standards
- Player versus game
- Startup justice warriors/move fast and cancel things
Pluralism ✅
Learning in context
- Interaction effects are what we want
- Interpolation is what we want
- Optimal conditioning is what we want
- Correlation construction is easier than causation learning
Epistemic community design
- Scientific community
- Messenger shooting
- on being a good weak learner
- Experimental ethics and surveillance
- Steps to an ecology of mind
- Epistemic bottlenecks is probably in this series too.
- Ensemble strategies at the population level. I don’t need to guess right; we need a society in which people in aggregate guess in a calibrated way.
Epistemic bottlenecks and bandwidth problems
- Information versus learning as a fundamental question of ML. When do we store exemplars on disk? When do we do gradient updates? How much compute should we spend on compressing?
- What is special about science? One thing is transmissibility. Can chatGPT do transmission? Or is it 100% tacit? How does explainability relate to transmissibility?
Tail risks and epistemic uncertainty
economic dematerialization via
- Enclosing the intellectual commons
- creative economy jobs
Academic publications as Veblen goods
X is Yer than Z
Haunting and exchangeability. Connection to interpolation, and individuation, and to legibility, and nonparametrics.
Something about the limits of legible fairness versus metis in common property regimes
The uncanny ally
Strategic ignorance
anthropic principles ✅ Good enough
You can’t talk about us without us ❌ what did I even mean? Something about mottes and baileys?
subculture dynamics ✅ Good enough
Opinion dynamics (memetics for beginners) ✅ Good enough
Iterative game theory under bounded rationality ❌ too general
Memetics ❌ (too big, will never finish)
Cradlesnatch calculator ✅ Good enough
Singularity lite, the orderly retreat from relevance
4 music stuff
5 Misc
- Transforming Probability Spaces
- Doesn’t CGD find a pursuit basis?
6 Workflow optimization
7 graphical models
- Kernel embedding of distributions on Wikipedia
- versus autodiff: There and Back Again: A Tale of Slopes and Expectations | Mathematics for Machine Learning
- Zhoubin
- Montanari
- Machine Learning — Graphical Model Exact inference (Variable elimination, Belief propagation, Junction tree) | by Jonathan Hui
- scribe_note_lecture13.pdf
- Belief propagation
- gss2013_11344.pdf
8 “transfer” learning
- Bernhard Schölkopf: From statistical to causal learning
- Bernhard Schölkopf: Learning Causal Mechanisms (ICLR invited talk)
- thuml/Transfer-Learning-Library: Transfer Learning Library for Domain Adaptation, Task Adaptation, and Domain Generalization
- Transfer Learning — Transfer Learning Library 0.0.24 documentation
- thuml/A-Roadmap-for-Transfer-Learning
9 Custom diffusion
- GitHub - PRIV-Creation/Awesome-Diffusion-Personalization: A collection of resources on personalization with diffusion models.
- GitHub - PRIV-Creation/UniDiffusion: A Diffusion training toolbox based on diffusers and existing SOTA methods, including Dreambooth, Textual Inversion, LoRA, Custom Diffusion, XTI, ….
10 Commoncog
- Start Here: Commoncog’s Best Posts - Commoncog
- The 2022 Commoncog Recap - Commoncog
- Setting the Business Expertise Series Free - Commoncog
- Reading (non-fiction) books for present-self vs future-self - Self Improvement Inputs - The Commonplace Community
- On Moving Fast and How to Move Faster - Self Improvement Inputs - The Commonplace Community
- The Commonplace Community
- Top topics - The Commonplace Community
- Commoncog - Commoncog
11 Music skills
12 Internal
13 ICML 2023 workshop
- Differentiable Almost Everything: Differentiable Relaxations, Algorithms, Operators, and Simulators | ICML 2023 Workshop, Honolulu, Hawaii — ICML 2023
- Structured Probabilistic Inference & Generative Modeling @ ICML — ICML 2023
- Duality Principles for Modern ML — ICML 2023
- Synergy of Scientific and Machine Learning Modeling — ICML 2023
14 Neurips 2022 follow-ups
- The Symbiosis of Deep Learning and Differential Equations (DLDE)
- NeurIPS 2022 Workshop DLDE | OpenReview
- Arya et al. (2022) — stochastic gradients are more general than deterministic ones because they are defined on discrete variables; see the sketch after this list
- Rudner et al. (2022)
- Phillips et al. (2022) — diffusions in the spectral domain allow us to handle continuous function valued inputs
- Gahungu et al. (2022)
- Wu, Maruyama, and Leskovec (2022) LE-PDE is a learnable low-rank approximation method
- Holl, Koltun, and Thuerey (2022) — Physics loss via forward simulations, without the need for sensitivity.
- Neural density estimation
- Metrics for inverse design and inverse inference problems; the former is in fact easier. Or is it? Can we simply attain forward prediction loss?
- Noise injection in emulator learning (see refs in Su et al. (2022))
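As a toy illustration of the Arya et al. point above (this is the generic score-function/REINFORCE estimator, not their construction, and the Bernoulli loss is made up for the example): the gradient of an expectation over a discrete variable is perfectly well defined even though the sampled value itself has no derivative.

```python
# Minimal sketch, not Arya et al.'s method: a score-function (REINFORCE)
# estimator gives an unbiased gradient of an expectation over a *discrete*
# random variable, where no deterministic/pathwise derivative exists.
import numpy as np

rng = np.random.default_rng(0)

def reinforce_grad(theta, n=200_000):
    """Estimate d/dtheta E_{x ~ Bernoulli(sigmoid(theta))}[f(x)]."""
    p = 1.0 / (1.0 + np.exp(-theta))        # Bernoulli success probability
    x = (rng.random(n) < p).astype(float)   # discrete samples: no gradient path
    f = np.where(x == 1.0, 3.0, 1.0)        # arbitrary loss on the discrete outcome
    score = x - p                           # d/dtheta log Bernoulli(x; sigmoid(theta))
    return float(np.mean(f * score))

theta = 0.3
p = 1.0 / (1.0 + np.exp(-theta))
print(reinforce_grad(theta))  # stochastic estimate
print(2.0 * p * (1.0 - p))    # exact gradient: E[f] = 1 + 2p, so d/dtheta = 2 p (1 - p)
```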
15 Conf, publication venues
16 Neurips 2022
- NeurIPS Workshop on Gaussian Processes, Spatiotemporal Modeling, and Decision-making Systems
- SBM @ NeurIPS
- Causal dynamics
- The Symbiosis of Deep Learning and Differential Equations (DLDE)
- Machine Learning and the Physical Sciences, NeurIPS 2022
- AI for Science: Progress and Promises
- Machine Learning Street Talk
17 Neurips 2021
- Storchastic: A Framework for General Stochastic Automatic Differentiation
- Causal Inference & Machine Learning: Why now?
- Real-Time Optimization for Fast and Complex Control Systems
- [2104.13478] Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges
- Cheng Soon Ong, Marc Peter Deisenroth | There and Back Again: A Tale of Slopes and Expectations (“Let’s unify automatic differentiation, integration and message passing”)
- David Duvenaud, J. Zico Kolter, Matt Johnson | Deep Implicit Layers: Neural ODEs, Equilibrium Models and Beyond
18 Music
Nestup / cutelabnyc/nested-tuplets: Fancy javascript for manipulating nested tuplets.
19 Hot topics
- Beyond Message Passing: a Physics-Inspired Paradigm for Graph Neural Networks
- Wee-Sun Lee on GNNs
- How GNNs and Symmetries can help to solve PDEs - Max Welling
- Equations of Motion from a Time Series
- Path Integrals and Feynman Diagrams for Classical Stochastic Processes
- Inference for Stochastic Differential Equations
- George Ho, Modern Computational Methods for Bayesian Inference: A Reading List is a good curation of modern Bayes methods posts. The next links come from there.
- will wolf on neural methods in Simulation-Based Inference
- will wolf, Deriving Expectation-Maximization
- will wolf, Deriving Mean-Field Variational Bayes
- Reality Is Just a Game Now
- Michael Bronstein, Graph Neural Networks as gradient flows, re: [2206.10991] Graph Neural Networks as Gradient Flows: understanding graph convolutions via energy
- M Bronstein’s ICLR 2021 Keynote, Geometric Deep Learning: The Erlangen Programme of ML
- How to write a great research paper
- The Notion of “Double Descent”
- Jaan on translating between variational terminology in physics and ML
- Sander on waveform audio
- yuge shi’s ELBO gradient post is excellent
- Francis Bach, the many faces of integration by parts.
- Bubeck on hot results in learning theory takes him far from the world of mirror descent, where I first met him. He also lectures well, IMO.
- Causality for Machine Learning
20 Stein stuff
22 GP research
- https://www.patreon.com/posts/new-linearized-69325387
- Regression-based covariance functions for nonstationary spatial modeling
- kalman-jax/sde_gp.py at master · AaltoML/kalman-jax
- AaltoML/kalman-jax: Approximate inference for Markov Gaussian processes using iterated Kalman smoothing, in JAX
22.1 Invenia’s GP expansion ideas
- Gaussian Processes: from one to many outputs
- Implementing a scalable multi-output GP model with exact inference
- Scaling multi-output Gaussian process models with exact inference
- wesselb/stheno: Gaussian process modelling in Python
- Linear Models from a Gaussian Process Point of View with Stheno and JAX
23 SDEs, optimization and gradient flows
Nguyen and Malinsky (2020)
Statistical Inference via Convex Optimization.
Conjugate functions illustrated.
Francis Bach on the use of geometric sums and a different take by Julyan Arbel.
Tutorial on approximating differentiable control problems. An extension of this is universal differential equations.
24 Career tips and metalearning
jbhuang0604/awesome-tips
Making the right moves: A Practical Guide to Scientific Management for Postdocs and New Faculty
There is a Q&A site about this, Academia stackexchange
For early career types, classic blog Thesis Whisperer
Read Academic work-life-balance survey to feel like not bothering with academe.
AI research: the unreasonably narrow path and how not to be miserable
How to Become the Best in the World at Something
This is how skill stacking works. It’s easier and more effective to be in the top 10% in several different skills — your “stack” — than it is to be in the top 1% in any one skill.
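Back-of-envelope version of that claim, under the strong (and surely false in detail) assumption that skill percentiles are independent:

$$
\Pr[\text{top }10\%\text{ in each of }3\text{ skills}] \approx 0.1^{3} = 10^{-3},
$$

i.e. roughly a one-in-a-thousand combination, already rarer than top 1% at any single skill, and each extra skill in the stack only costs a top-10% effort.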
25 Ensembles and particle methods
26 Foundations of ML
So much Michael Betancourt.
- Probability Theory (For Scientists and Engineers)
- Course Notes 7: Gaussian Process Engineering | Michael Betancourt on Patreon
- Conditional Probability Theory (For Scientists and Engineers)
- Autodiff for Implicit Functions Paper Live Stream Wed 1/12 at 11 AM EST | Michael Betancourt on Patreon
- New Autodiff Paper | Michael Betancourt on Patreon
- Rumble in the Ensemble
- Scholastic Differential Equations | Michael Betancourt on Patreon
- Identity Crisis
- Invited Talk: Michael Bronstein
- Product Placement
- (Not So) Free Samples
- Updated Geometric Optimization Paper
- We Built Sparse City