Start with the resources marked with an asterisk (*)
Textbooks:
Deep Learning with Python, Third Edition by Francois Chollet and Matthew Watson (Ch 1 through 4) *
Deep Learning with Python, Second Edition by Francois Chollet
Basics of Neural Networks
Key Research Papers
Review papers
Foundational papers
“A Logical Calculus of the Ideas Immanent in Nervous Activity” by McCulloch and Pitts (1943)
“Learning representations by back-propagating errors” by Rumelhart et al (1986) *
“Backpropagation applied to handwritten zip code recognition” by LeCun et al (1989) *
“Gradient-based learning applied to document recognition” by LeCun et al (1998)
“Deep Sparse Rectifier Neural Networks” by Glorot et al (2011) [ReLU] *
“Attention Is All You Need” by Vaswani et al (2017) [Transformers]
Papers showing representational similarity between NNs and the brain
ANNs’ value for neuroscience
Brain-inspired learning models
AlphaGo / AlphaFold / IMO Gold
AlphaFold
"Highly accurate protein structure prediction with AlphaFold" by Jumper at al (2021) [paper]
"AlphaFold: a solution to a 50-year-old grand challenge in biology" by the AlphaFold team (2021) [blog post]
"‘The game has changed.' AI triumphs at solving protein structures" by Service (2021) [article]
“AlphaFold2 and its applications in the fields of biology and medicine” by Zhang et al (2023) [paper]
IMO Gold
Alexander Wei announcement [tweets]
Discussion with the team [video]
“Stochastic parrots” and “just statistics” argument
“The False Promise of ChatGPT” by Chomsky et al (2023) [NYT opinion]
Emily Bender’s “Lunch with the FT” [interview]
“Why AI is Harder Than We Think” by Mitchell (2021) [paper]
Other