Miles Cranmer - The Next Great Scientific Theory is Hiding Inside a Neural Network (April 3, 2024)

Published 2024-04-05
Machine learning methods such as neural networks are quickly finding uses in everything from text generation to construction cranes. Excitingly, those same tools also promise a new paradigm for scientific discovery.

In this Presidential Lecture, Miles Cranmer will outline an innovative approach that leverages neural networks in the scientific process. Rather than directly modeling data, the approach interprets neural networks trained using the data. Through training, the neural networks can capture the physics underlying the system being studied. By extracting what the neural networks have learned, scientists can improve their theories. He will also discuss the Polymathic AI initiative, a collaboration between researchers at the Flatiron Institute and scientists around the world. Polymathic AI is designed to spur scientific discovery using similar technology to that powering ChatGPT. Using Polymathic AI, scientists will be able to model a broad range of physical systems across different scales. More details: www.simonsfoundation.org/event/the-next-great-scie…
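
To make the idea above concrete, here is a minimal, hypothetical sketch of that workflow: fit a small neural network to toy data, then distill the network's learned input-output map into a closed-form expression with a symbolic regression tool such as PySR (the library Cranmer develops). The toy inverse-square data, network size, and operator set below are illustrative assumptions, not the lecture's actual setup.

```python
# Hypothetical sketch: train a small network on data, then distill it symbolically.
import numpy as np
from sklearn.neural_network import MLPRegressor
from pysr import PySRRegressor

# Toy "physics" data: an inverse-square-like target (placeholder, not from the lecture).
rng = np.random.default_rng(0)
X = rng.uniform(0.5, 2.0, size=(1000, 2))   # e.g. a mass and a separation
y = X[:, 0] / X[:, 1] ** 2                  # toy inverse-square law

# Step 1: fit a flexible but opaque model to the data.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, y)

# Step 2: symbolically regress the *network's* predictions rather than the raw data,
# so the recovered formula is a readable approximation of what the net learned.
model = PySRRegressor(
    niterations=40,
    binary_operators=["+", "-", "*", "/"],
)
model.fit(X, net.predict(X))
print(model)  # candidate equations ranked by accuracy and complexity
```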

All Comments (21)
  • @Bartskol
    So here we are. You guys seem to have been chosen by the algorithm for us to meet here. Welcome, for some reason.
  • @antonkot6250
    It seems like a very powerful idea: the AI observes the system, learns to predict its behaviour, and then the rules of those predictions are used to derive a mathematical statement. Wishing the authors the best of luck.
  • @laalbujhakkar
    I came here to read all the insane comments, and I’m not disappointed.
  • It makes intuitive sense that a cat video is a better initialization than noise. It's a real measurement of the physical world.
  • @cziffras9114
    It is precisely what I've been working on for some time now, and it is very well explained in this presentation. Nice work! (The idea of PySR is outrageously elegant, I absolutely love it!)
  • Jesus christ, okay YouTube, I will watch this video now. Stop putting it in my recommendations every damn time.
  • @jim37569
    Love the definition of simplicity, I found that to be pretty insightful.
  • @tehdii
    I am once again re-reading David Foster Wallace's history of infinity, Everything and More. There he describes Bacon's Novum Organum. In Book One there is an apt statement that I would like to paste: "8. Even the effects already discovered are due to chance and experiment, rather than to the sciences; for our present sciences are nothing more than peculiar arrangements of matters already discovered, and not methods for discovery, or plans for new operations."
  • @AVCD44
    What an amazing fck of a presentation. I mean, of course the subject and research are absolutely mind-blowing, but the presentation in itself is so crystal clear. I will surely aim for this kind of distilled communication, thank you!!
  • The folding analogy looks a lot like convolution. Also, the piecewise-continuous construction of functions is used extensively in waveform composition in circuit-analysis applications, though the notation is different: multiplication by the unit step function u(t) (see the small sketch after the comments).
  • I was wondering about, or rather missing, the concept of meta-learning with transformers, especially because most of the physics simulations shown are quite low-dimensional. Put a ton of physics equations into a unifying language format, treat each problem as a gradient step of a transformer, and predict on new problems. That way the transformer has learned from other physics problems and may infer the equation/solution to your problem right away. The difference from pre-training is that these tasks or problems are shown one at a time, rather than as an entire distribution without specification. There has been work on this for causal graphs and for low-dimensional image data such as MNIST, where the token size is the limiting factor of this approach, I believe. (A rough sketch of the one-step-per-task idea appears after the comments.)
  • @donald-parker
    Being able to derive gravity laws from raw data is a cool example. How sensitive is this process to bad data? For example: non-unique samples, imprecise measurements, missing data (a poor choice of sample space), irrelevant data, biased data, etc. I would expect any attempt to derive new theories from raw data to have this sort of problem in spades.
  • There are multiple awesome ideas in this presentation. For example, the idea of having a neural net discover new physics, or simply be a better scientist than a human scientist. Such neural nets are on the verge of discovery, or maybe in use right now. But the symbolic distillation in multidimensional space is the most intriguing to me, and a subject that has been worked on for as long as neural networks have been around. A genetic algorithm, but maybe also another (maybe bigger?) neural network, is needed for such a symbolic distillation. In a way, yes, the distillation is needed to speed up the inference process, but I can also imagine that a future AI (past the singularity) will not use symbolic distillation. It will simply create a better single model of reality in its network, and such a model will be enough to understand the reality around it and to make (future) predictions of its behavior.
  • @chrisholder3428
    For anyone who does not work with ML: the takeaway of symbolic regression as a means of model simplification may seem quite powerful at first, but often our rationale for using neural nets is precisely the difficulty of deriving explainable analytical expressions for phenomena. People like Stephen Wolfram suggest that this very assumption, that complex phenomena can be modeled analytically, is why we are having problems advancing. To seasoned ML researchers, the title of the video sounds like the speaker will be explaining techniques for analyzing neural-net weights, rather than talking about this.
  • This is SO cool! My first thought was the incredible speed you would get once the neural net is simplified down. For systems that are heavily used, this is so important.
  • @comosaycomosah
    Been in the rabbit hole lately, so glad this popped up. You rock, Miles!
  • @ryam4632
    This is a very nice idea. I hope it will work! It will be very interesting to see new analytical expressions coming out of complicated phenomena.
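
On the circuit-analysis comment above about piecewise construction with the unit step: a tiny, self-contained sketch (my own toy example, not from the talk) of building a ramp that saturates at 1 by multiplying pieces with shifted steps u(t).

```python
# Toy illustration: a piecewise waveform built from shifted unit steps u(t).
import numpy as np

def u(t):
    """Unit step: 0 for t < 0, 1 for t >= 0."""
    return np.heaviside(t, 1.0)

t = np.linspace(-1.0, 3.0, 401)

# f(t) = t on [0, 1), then held at 1 for t >= 1:
#   f(t) = t * [u(t) - u(t - 1)] + 1 * u(t - 1)
f = t * (u(t) - u(t - 1)) + 1.0 * u(t - 1)
```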
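And on the meta-learning comment: the simplest reading of "treat each problem as a gradient step" is sequential exposure to one task per update. The sketch below uses a tiny MLP as a stand-in for a transformer and made-up synthetic tasks; real meta-learning (MAML-style inner/outer loops, or in-context learning with a transformer over tokenized equations) is considerably more involved.

```python
# Rough sketch of per-task gradient updates (a stand-in for the transformer idea above).
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def sample_task():
    """A toy 'physics problem': y = a * sin(b * x) with random a, b."""
    a, b = torch.rand(2) * 2 + 0.5
    x = torch.rand(64, 1) * 4 - 2
    return x, a * torch.sin(b * x)

# Show the model one problem at a time, taking a single gradient step per task.
for step in range(1000):
    x, y = sample_task()
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

# After exposure to many tasks, probe the model on a new, unseen task.
x_new, y_new = sample_task()
print("loss on unseen task:", loss_fn(model(x_new), y_new).item())
```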