Advancing AI through cognitive science - Spring 2019
Instructor: Brenden Lake
Assistant Professor of Psychology and Data Science
New York University
Meeting time and location:
Thursday 4-5:50 PM
Meyer Room 465 (4 Washington Place)
Course numbers:
PSYCH-GA 3405.001 (Psychology)
DS-GA 3001.013 (Data Science)
Office hours:
Wednesdays 10-11:00 am, or by appointment; 60 5th Ave., Room 610
Summary: Why are people smarter than machines? This course explores how the study of human intelligence can inform and improve artificial intelligence. We will look to cognitive science, with special focus on cognitive development, to help elucidate a set of “key ingredients” that are important components of human learning and thought, but are either underutilized or absent in contemporary artificial intelligence. Through readings and discussion, we will cover ingredients such as “intuitive physics,” “intuitive psychology,” “compositionality,” “causality,” and “learning-to-learn,” although students will be encouraged to contribute other ingredients. Each ingredient will be discussed and compared from the perspectives of both cognitive science and AI, with readings drawn from the two fields in roughly equal proportion.
This is a small discussion-based seminar, so please come ready to participate in the discussion. Please note that this syllabus is not final and there may be further adjustments.
Pre-requisites
- This course is intended for graduate students in cognitive science or graduate students in data science / AI.
- Students are not expected to have a background in both cognitive science and AI. Instead, students may have experience in one field and the desire to learn about the other. Ideally, at the end of the course, students will have a deeper appreciation of contemporary issues in both fields and their potential for synergy.
- At minimum, it would be very good to have taken a graduate-level psychology course OR a graduate-level machine learning / AI course. If you have taken neither, this is probably not the right course for you.
- Programming is not a requirement for this course, although students may choose to incorporate programming in their final project.
Grading
The final grade is based on the final paper or project (50%), written reactions to the reading (25%), and participating in discussions (25%).
The final paper or project is done individually. For the final assignment, students may either write a paper that proposes an additional ingredient of human intelligence that is underutilized in AI, or complete a project that implements one of the ingredients discussed in class in an algorithm. The final assignment proposal is due on Thursday, April 4 (one half page, written). The final assignment is due on Tuesday, May 14.
What makes a good reaction post? There are many ways to write a good reaction post, and I would rather leave it up to you than impose a particular formula. Try to articulate an opinion about the readings, rather than write an exhaustive summary of the articles. I prefer to read your opinions rather than a summary! It’s great if you can put the articles in conversation with each other and with the theme of the class. It’s also okay to focus on one or two of the readings that interest you most, rather than talking about each of the readings equally.
The responses should be short – three short paragraphs is about right. I don’t want the reaction to take much of your time beyond reading the articles themselves (15 minutes is reasonable for writing your response).
Course discussion
We will be using Piazza for reactions to readings and class discussion.
The signup link for our Piazza page is available here (https://piazza.com/nyu/spring2019/psychga3405001).
Once signed up, our class Piazza page is available here (https://piazza.com/nyu/spring2019/psychga3405001/home).
Final assignment
- The final assignment is due Tuesday, May 14.
- You will also be asked to give a 5-minute presentation on your final project on Thursday, May 9.
- The final paper or project is done individually (not as a group).
- Option 1: A final paper that proposes an additional ingredient of human intelligence that is underutilized in AI. The paper should summarize the psychological literature on the ingredient, and discuss the relevant AI literature or lack thereof (about 8 pages).
- Option 2: Complete a project that implements an important aspect of one of the ingredients discussed in class (intuitive physics, intuitive psychology, compositionality, etc.) in an algorithm (with a 4-page writeup).
- If you can link the project to your research, that’s encouraged!
- The final assignment proposal is due on Thursday, April 4 (one half page, written). Submit via email (brenden@nyu.edu) with the file name lastname-aai-proposal.pdf.
- Please submit the final assignment via email (brenden@nyu.edu) with the file name lastname-aai-final.pdf.
Course policies
Auditing: Please email the instructor to see if there are available seats. Priority goes to registered students, and then by date of audit request.
Overview of topics and schedule
- 1/31 Introduction and overview
- 2/7 Deep learning – Lecture
- 2/14 Deep learning - Discussion
- 2/21 Intuitive physics (part 1: humans)
- 2/28 Intuitive physics (part 2: machines)
- 3/7 Intuitive psychology (part 1: humans)
- 3/14 Intuitive psychology (part 2: machines)
- 3/21 NO CLASS. Spring Recess
- 3/28 Compositionality
- 4/4 Causality
- Final assignment proposal due (Thursday 4/4)
- 4/11 Learning-to-learn
- 4/18 Critiques of “Building machines that learn and think like people”
- 4/25 Language and Culture
- 5/2 Emotion and Egocentric learning
- 5/9 Final assignment presentations
- Final assignment due (Tuesday 5/14)
Detailed schedule and readings
Please see below for the assigned readings for each class (to be read before class). Before each class, students will be asked to submit a reaction to the readings (three paragraphs). Reaction posts are submitted via Piazza. Papers are available for download on NYU Classes in the “Resources” folder. Reactions are due by midnight the day before class so that I have time to read the reactions.
1/31 Introduction and overview
2/7 Deep learning – Lecture
2/14 Deep learning - Discussion
- Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2017). Building machines that learn and think like people. Behavioral and Brain Sciences.
Only Sections 1-3 (pgs. 1-9)
- Mnih, V., Kavukcuoglu, K., Silver, D., … & Hassabis, D. (2015). Human-level control through deep reinforcement learning. Nature 518(7540):529–33.
- Graves, A., Wayne, G., Reynolds, M., Harley, T., Danihelka, I., Grabska-Barwińska, A., … & Badia, A. P. (2016). Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626), 471-476.
- Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhutdinov, R., Zemel, R., & Bengio, Y. (2015). Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning (pp. 2048-2057).
- A reaction post is required for this class and the following classes (due midnight the night before class).
2/21 Intuitive physics (part 1: humans)
- Building machines that learn and think like people (Section 4 through 4.1, pg. 9-11)
- Spelke, E. S. (1990). Principles of object perception. Cognitive Science 14(1):29–56.
- Xu, F., & Carey, S. (1996). Infants’ metaphysics: The case of numerical identity. Cognitive psychology, 30(2), 111-153.
- Battaglia, P. W., Hamrick, J. B. & Tenenbaum, J. B. (2013). Simulation as an engine of physical scene understanding. Proceedings of the National Academy of Sciences 110(45):18327–32.
2/28 Intuitive physics (part 2: machines)
- Lerer, A., Gross, S. & Fergus, R. (2016). Learning physical intuition of block towers by example. Presented at the 33rd International Conference on Machine Learning (ICML).
- Battaglia, P., Pascanu, R., Lai, M. & Rezende, D. J. (2016). Interaction networks for learning about objects, relations and physics. Advances in Neural Information Processing Systems.
- Mottaghi, R., Bagherinezhad, H., Rastegari, M., & Farhadi, A. (2016). Newtonian scene understanding: Unfolding the dynamics of objects in static images. Computer Vision and Pattern Recognition (pp. 3521-3529).
3/7 Intuitive psychology (part 1: humans)
- Building machines that learn and think like people (Section 4.1.2, pg. 11-12)
- Woodward, A. L. (1998). Infants selectively encode the goal object of an actor’s reach. Cognition, 69(1), 1-34.
- Csibra, G., Biro, S., Koos, O. & Gergely, G. (2003). One-year-old infants use teleological representations of actions productively. Cognitive Science 27:111–33
- Baker, C. L., Jara-Ettinger, J., Saxe, R. & Tenenbaum, J. B. (2017). Rational quantitative attribution of beliefs, desires and percepts in human mentalizing. Nature Human Behaviour.
3/14 Intuitive psychology (part 2: machines)
- Raileanu, R., Denton, E., Szlam, A., and Fergus, R. (2018). Modeling Others using Oneself in Multi-Agent Reinforcement Learning. Proceedings of the 35th International Conference on Machine Learning (ICML).
- Rabinowitz, N. C., Perbet, F., Song, H. F., Eslami, S. M. A., Botvinick, M. (2018). Machine theory of mind. Proceedings of the 35th International Conference on Machine Learning (ICML).
3/21 NO CLASS. Spring Recess
3/28 Compositionality
- Building machines that learn and think like people (Section 4.2-4.2.1, pg. 12-15)
- Marcus, G. (1998). Rethinking eliminative connectionism. Cognitive Psychology 37(3):243–82.
- Lake, B. M., Linzen, T., and Baroni, M. (2019). Human few-shot learning of compositional instructions. Preprint available on arXiv:1901.04587.
- Reed, S., & De Freitas, N. (2016). Neural programmer-interpreters. International Conference on Learning Representations (ICLR).
4/4 Causality
- Building machines that learn and think like people (Section 4.2.2, pg. 15-16)
- Murphy, G. L. & Medin, D. L. (1985) The role of theories in conceptual coherence. Psychological Review 92(3):289–316.
- Lake, B. M., Salakhutdinov, R. & Tenenbaum, J. B. (2015) Human-level concept learning through probabilistic program induction. Science 350(6266):1332–38.
- Hewitt, L. B., Nye, M. I., Gane, A., Jaakkola, T., & Tenenbaum, J. B. (2018). The Variational Homoencoder: Learning to learn high capacity generative models from few examples. Uncertainty in Artificial Intelligence (UAI).
4/11 Learning-to-learn
- Building machines that learn and think like people (Section 4.2.3-4.3, pg. 16-19)
- Smith, L. B., Jones, S. S., Landau, B., Gershkoff-Stowe, L. & Samuelson, L. (2002) Object name learning provides on-the-job training for attention. Psychological Science 13(1):13–19.
- Ritter, S., Barrett, D. G., Santoro, A., & Botvinick, M. M. (2017). Cognitive psychology for deep neural networks: A shape bias case study. International Conference on Machine Learning (ICML).
- Snell, J., Swersky, K., & Zemel, R. (2017). Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems (NIPS).
- Wang, J. X., Kurth-Nelson, Z., Tirumala, D., Soyer, H., Leibo, J. Z., Munos, R., … & Botvinick, M. (2016). Learning to reinforcement learn. arXiv preprint arXiv:1611.05763.
4/18 Critiques of “Building machines that learn and think like people”
- Building machines that learn and think like people (Section 5-end, pg. 19-25)
- Commentaries to read:
- Botvinick et al., “Building machines that learn and think for themselves”
- Caglar and Hanson, “Back to the future: The return of cognitive functionalism”
- Chater and Oaksford, “Theories or fragments?”
- Clegg and Corriveau, “Children begin with the same start-up software, but their software updates are cultural”
- Davis and Marcus, “Causal generative models are just a start”
- Dennett and Lambert, “Thinking like animals or like colleagues?”
- Hansen, Lampinen, Suri, and McClelland, “Building on prior knowledge without building it in”
- MacLennan, “Benefits of embodiment”
- Moerman, “The argument for single-purpose robots”
- Oudeyer, “Autonomous development and learning in AI and robotics: Scaling up deep learning to human-like learning”
- Spelke and Blass, “Intelligent machines and human minds”
- Tessler, Goodman, and Frank, “Avoiding frostbite: It helps to learn from others”
- Response: Lake, Ullman, Tenenbaum, and Gershman, “Ingredients of intelligence: From classic debates to an engineering roadmap” (pg. 50-59)
4/25 Language and Culture
- Mikolov, T., Joulin, A. & Baroni, M. (2016) A roadmap towards machine intelligence. arXiv preprint 1511.08130.
- Lupyan, G. & Bergen, B. (2016) How language programs the mind. Topics in Cognitive Science 8(2):408–24.
- Tomasello, M., Kruger, A. C., & Ratner, H. H. (1993). Cultural learning. Behavioral and Brain Sciences, 16(3), 495-511.
5/2 Emotion and Egocentric learning
- Smith, L. B., & Slone, L. K. (2017). A developmental approach to machine learning? Frontiers in Psychology, 8, 2124.
- Bambach, S., Crandall, D., Smith, L., & Yu, C. (2018). Toddler-Inspired Visual Object Learning. In Advances in Neural Information Processing Systems.
- Ong, D., Soh, H., Zaki, J., & Goodman, N. (2019). Applying Probabilistic Programming to Affective Computing. IEEE Transactions on Affective Computing.
5/9 Final assignment presentations