There is growing interest in developing artificial intelligence techniques with creative capabilities that can play a role in producing creative results or solving complex problems. Research in this field has already produced AI systems that, autonomously or in collaboration with humans, generate creative artefacts or behaviour in fields such as music, visual arts, storytelling, games, architecture, design and scientific discovery.

The IJCAI 2023 AI, Arts & Creativity special track aims to explore the relationships between AI and the arts, creativity and creative practice.

See the Call for Papers here: https://ijcai-23.org/call-for-papers-ai-the-arts-and-creativity/

Paper Schedule

Thursday 24th August 11:45-12:45 AI and Arts: Arts, Design and Crafts

Session Chair: Philippe Pasquier (Simon Fraser University)

1142. TeSTNeRF: Text-Driven 3D Style Transfer via Cross-Modal Learning (Jiafu Chen, Boyan Ji, Zhanjie Zhang, Tianyi Chu, Zhiwen Zuo, Lei Zhao, Wei Xing, Dongming Lu)

1472. Collaborative Neural Rendering Using Anime Character Sheets (Zuzeng Lin, Ailin Huang, Zhewei Huang)

5112. Learn and Sample Together: Collaborative Generation for Graphic Design Layout (Haohan Weng, Danqing Huang, Tong Zhang, Chin-Yew Lin)

5558. Automating Rigid Origami Design (Jeremia Geiger, Karolis Martinkus, Oliver Richter, Roger Wattenhofer)

2515. IberianVoxel: Automatic Completion of Iberian Ceramics for Cultural Heritage Studies (Pablo Navarro, Celia Cintas, Manuel Lucena, José Manuel Fuertes, Antonio Rueda, Rafael Segura, Carlos Ogayar-Anguita, Rolando González-José, Claudio Delrieux)

5568. Towards Symbiotic Creativity: A Methodological Approach to Compare Human and AI Robotic Dance Creations (Allegra De Filippo, Luca Giuliani, Eleonora Mancini, Andrea Borghesi, Paola Mello, Michela Milano)

Thursday 24th August 15:30-16:50 AI and Arts: Sound and Music

Session Chair: Celia Cintas (IBM Research Africa)

4350. Musical Voice Separation as Link Prediction: Modeling a Musical Perception Task as a Multi-Trajectory Tracking Problem [from the Main Track] (Emmanouil Karystinaios, Francesco Foscarin, Gerhard Widmer)
5605. The ACCompanion: Combining Reactivity, Robustness, and Musical Expressivity in an Automatic Piano Accompanist (Carlos Cancino-Chacón, Silvan Peter, Patricia Hu, Emmanouil Karystinaios, Florian Henkel, Francesco Foscarin, Gerhard Widmer)
5607. Discrete Diffusion Probabilistic Models for Symbolic Music Generation (Matthias Plasser, Silvan Peter, Gerhard Widmer)

5672. Graph-based Polyphonic Multitrack Music Generation (Emanuele Cosenza, Andrea Valenti, Davide Bacciu)

5652. Q&A: Query-Based Representation Learning for Multi-Track Symbolic Music re-Arrangement (Jingwei Zhao, Gus Xia, Ye Wang)

5448. Evaluating Human-AI Interaction with MMM-Cubase: A Creative AI System for Music Composition (Renaud Bougueng Tchemeube, Jeffrey Ens, Cale Plut, Philippe Pasquier, Maryam Safi, Yvan Grabit, Jean-Baptiste Rolland)

1743. NAS-FM: Neural Architecture Search for Tunable and Interpretable Sound Synthesis Based on Frequency Modulation (Zhen Ye, Wei Xue, Xu Tan, Qifeng Liu, Yike Guo)

5508. DiffuseStyleGesture: Stylized Audio-Driven Co-Speech Gesture Generation with Diffusion Models (Sicheng Yang, Zhiyong Wu, Minglei Li, Zhensong Zhang, Lei Hao, Weihong Bao, Ming Cheng, Long Xiao)

Panel

Thursday 24th August 17:15-18:30
AI and Arts Panel: What are the upcoming challenges for AI-assisted creativity?

Moderator: F. Amilcar Cardoso, University of Coimbra

Panelists:

Carlos Cancino-Chacón, Johannes Kepler University

Bio
Carlos Cancino-Chacón is an Assistant Professor at the Institute of Computational Perception, Johannes Kepler University, Linz, Austria. He was previously a Guest Researcher at the RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Norway, and a Postdoctoral Researcher at the Austrian Research Institute for Artificial Intelligence. His research focuses on applying artificial intelligence and machine learning to the study of expressive music performance, musical human-computer interaction, music cognition, and music theory. He received a doctoral degree in Computer Science from the Institute of Computational Perception of the Johannes Kepler University Linz, a master's degree in Electrical Engineering and Audio Engineering from the Graz University of Technology, an undergraduate degree in Physics from the National Autonomous University of Mexico, and an undergraduate degree in Piano Performance from the National Conservatory of Music of Mexico.

Celia Cintas, IBM Research Africa

Bio
Celia Cintas is a Research Scientist at IBM Research Africa – Nairobi, where she is a member of the AI Science team at the Kenya Lab. Her current research explores subset scanning for anomalous pattern detection under generative models; she is also interested in robustness and fairness in deep learning models. Previously, she was a grantee of the National Scientific and Technical Research Council (CONICET), working on deep learning for population studies at LCI-UNS and IPCSH-CONICET (Argentina) as part of the Consortium for the Analysis of the Diversity and Evolution of Latin America (CANDELA). During her Ph.D., she was a visiting student at University College London (UK), and later a visiting postdoctoral researcher at the University of Jaén (Spain), applying machine learning to heritage and archaeological studies. She holds a Ph.D. in Computer Science from the Universidad Nacional del Sur (Argentina). https://celiacintas.io/.

Jivko Sinapov, Tufts University

Bio
Jivko Sinapov is an Assistant Professor of Computer Science at Tufts University, where he leads the Multimodal Learning, Interaction, and Perception (MuLIP) lab (https://mulip.cs.tufts.edu). He received his Ph.D. in computer science and human-computer interaction from Iowa State University in 2013 and subsequently worked as a postdoctoral associate at UT Austin before joining Tufts in 2017. His research interests include cognitive and developmental robotics, creative problem solving, human-robot interaction, and reinforcement learning. He received the NSF CAREER award in 2023 and the Tufts ROUTE award for undergraduate research advising in 2022.

Philippe Pasquier, Simon Fraser University

Bio
Philippe Pasquier is a professor at Simon Fraser University's School of Interactive Arts and Technology, where he directs the Metacreation Lab for Creative AI and leads a research-creation program around generative systems for creative tasks. He has been a scientist specializing in artificial intelligence since 1999, and is also a software designer, a multidisciplinary media artist, an educator, and a community builder. His contributions bridge fundamental research on generative systems, machine learning, affective computing, and computer-assisted creativity with applied research in the creative software industry and artistic practice in interactive and generative art.

Organization

Track chairs

F. Amilcar Cardoso, University of Coimbra
Anna Kantosalo, University of Helsinki
François Pachet, Spotify

Program Committee

  • Allegra De Filippo (University of Bologna)
  • Aniket Bera (University of Maryland)
  • Anna Jordanous (University of Kent)
  • Antonio Chella (Università degli Studi di Palermo)
  • Antonio Lieto (University of Turin and ICAR-CNR)
  • Ashok Goel (Georgia Institute of Technology)
  • Bob L. T. Sturm (KTH Royal Institute of Technology)
  • Carlos León (Universidad Complutense de Madrid)
  • Colin G. Johnson (University of Nottingham)
  • Dan Ventura (Brigham Young University)
  • Emilios Cambouropoulos (Aristotle University of Thessaloniki)
  • Enric Plaza (IIIA, CSIC)
  • Filipe Calegario (Universidade Federal de Pernambuco)
  • Georgios N. Yannakakis (University of Malta)
  • Geraint A. Wiggins (Vrije Universiteit Brussel)
  • Gerardo I. Simari (Universidad Nacional del Sur in Bahia Blanca and CONICET)
  • Gerhard Widmer (Johannes Kepler University)
  • H. Sofia Pinto (INESC-ID)
  • Hannu Toivonen (University of Helsinki)
  • Hugo G. Oliveira (University of Coimbra)
  • Jean-Pierre Briot (CNRS)
  • João Correia (University of Coimbra)
  • Jon McCormack (Monash University)
  • Juan Jose Bosch (Spotify)
  • Julian Togelius (New York University)
  • Kazjon Grace (University of Sydney)
  • Kıvanç Tatar (Chalmers University of Technology)
  • Kristin Carlson (Illinois State University)
  • Lorenzo Porcaro (EC Joint Research Centre)
  • Marc Cavazza (National Institute of Informatics)
  • Marco Schorlemmer (IIIA, CSIC)
  • Maria Teresa Llano (Monash University)
  • Mark d’Inverno (Goldsmiths, University of London)
  • Mark Riedl (Georgia Institute of Technology)
  • Matthew Yee-King (Goldsmiths, University of London)
  • Mikhail Jacob (Resolution Games)
  • Oliver R. Bown (University of New South Wales)
  • Pablo Gervás (Universidad Complutense de Madrid)
  • Pedro Martins (University of Coimbra)
  • Penousal Machado (University of Coimbra)
  • Philippe Pasquier (Simon Fraser University)
  • Pierre Roy (Spotify)
  • Rob Saunders (Leiden University)
  • Roberto Confalonieri (University of Padua)
  • Shlomo Dubnov (UC San Diego)
  • Simo M. Linkola (University of Helsinki)
  • Tapio Takala (Aalto University)
  • Tarek R. Besold (Sony AI)
  • Tony Veale (University College Dublin)
  • Valentina Presutti (University of Bologna)