In this talk I will take a developmental perspective on the problem of modeling open-ended skill learning in artificial agents. I'll argue that such agents need to be both autotelic and social, i.e. intrinsically motivated to represent and pursue their own goals while still learning within human cultures. I will develop this argument by presenting several learning architectures I developed during my PhD. In the second part of the talk, I'll discuss more recent and ongoing projects that aim to leverage natural human feedback to teach artificial agents efficiently, building on a program-induction perspective on learning and reasoning.