[AI Paper Review - 03] Meta-learning with implicit gradients
[Meta-learning with implicit gradients]
In my previous posts, I introduced two exemplary gradient-based meta-learning models that build on MAML:
MT-nets and warped gradients.
Today, I add another one, equipped with implicit gradients,
which scales up MAML.
In MAML, updating the meta-parameters requires backpropagating through the dynamics of gradient descent.
MAML with implicit gradients (iMAML) adds a proximal regularization term to the inner-level optimization,
so that the outer-level optimization depends only on the inner-level solution and NOT on the path taken by the inner-loop optimizer
[A. Rajeswaran et al., 2019].
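The idea can be sketched on a toy problem. Below is a minimal NumPy illustration (my own hedged sketch, not the paper's code): the inner loss is a quadratic with Hessian A, the proximal term is (λ/2)||φ − θ||², and the implicit function theorem gives the meta-gradient as λ(A + λI)⁻¹∇φ L_outer(φ*), with no backpropagation through the inner optimization path. The matrices A, b, c and the quadratic outer loss are all assumed for illustration.

```python
import numpy as np

# Hypothetical quadratic inner (support) loss: L(phi) = 0.5 phi^T A phi - b^T phi
rng = np.random.default_rng(0)
d, lam = 5, 2.0
M = rng.standard_normal((d, d))
A = M @ M.T + np.eye(d)          # SPD Hessian of the inner loss
b = rng.standard_normal(d)
c = rng.standard_normal(d)       # linear term of the outer (query) loss

def inner_solution(theta):
    # phi* = argmin_phi  L(phi) + (lam/2) * ||phi - theta||^2
    # For a quadratic loss the proximal inner problem has a closed form.
    return np.linalg.solve(A + lam * np.eye(d), b + lam * theta)

def outer_loss(phi):
    # Query loss evaluated at the adapted parameters (any smooth loss works).
    return 0.5 * phi @ phi + c @ phi

theta = rng.standard_normal(d)
phi_star = inner_solution(theta)

# iMAML implicit meta-gradient:
#   d/d(theta) outer_loss(phi*(theta)) = lam * (H + lam*I)^{-1} grad_phi outer_loss(phi*)
# where H is the Hessian of the inner loss at phi* (here H = A).
g_outer = phi_star + c
meta_grad = lam * np.linalg.solve(A + lam * np.eye(d), g_outer)

# Sanity check: central finite differences through the inner solve
eps = 1e-6
fd = np.zeros(d)
for i in range(d):
    e = np.zeros(d); e[i] = eps
    fd[i] = (outer_loss(inner_solution(theta + e))
             - outer_loss(inner_solution(theta - e))) / (2 * eps)

print(np.allclose(meta_grad, fd, atol=1e-5))  # True
```

Note that the implicit formula only needs the inner solution φ* and one linear solve; in the paper this solve is approximated with conjugate gradient using Hessian-vector products, so the full Hessian is never formed.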
Paper link ↓
https://papers.nips.cc/paper/8306-meta-learning-with-implicit-gradients.pdf