Artificial intelligence allows for alternative ways to tackle the collective challenges we face concerning the environment. In particular, the United Nations Sustainable Development Goals (SDGs) (see https://sdgs.un.org/goals) and the Leave No One Behind (LNOB) principle (see https://unsdg.un.org/2030-agenda/universal-values/leave-no-one-behind) are an urgent call to action in which the scientific community has an important role to play.

This special track is dedicated to research triggered by real-world key questions, is carried out in collaboration with civil society stakeholders, and uses AI to work towards the SDGs and LNOB. The track aims to encourage the application of AI to solve current global and local challenges and to strengthen the civil society-science-policy interface. Multidisciplinary research, including computer, social and natural sciences, as well as multilateral collaborations with NGOs, community organizations and government agencies, is, therefore, an essential characteristic of the submissions to this track.

We invite two types of contributions: research papers and research project proposals. The authors who wish to submit demos that are relevant to the theme of AI and Social Good are invited to submit them via the IJCAI 2023 Demo track (see below).

The three primary selection criteria for all AI and Social Good submissions will be: 1) the scientific quality of the work and its contribution to the state of the art in AI and the social or natural sciences; 2) the relevance and impact with respect to the SDGs and the LNOB principle; and 3) the collaboration with civil society stakeholders who have first-hand knowledge of the topic (taking into consideration the location of the IJCAI 2023 Conference in Macao SAR, teamwork with local and regional African NGOs will be especially appreciated). All submissions should clearly state which specific real-world challenge is being tackled, identify or refer to the expert stakeholders who provide in-the-field know-how, explain how the topic relates to the SDGs and the LNOB principle, and describe the contribution to the state of the art in AI or the social or natural sciences.

Submission of Research Papers

Research papers should have the same format (7 pages plus up to 2 pages for references) and follow the same general instructions as the main conference (see https://ijcai-23.org/call-for-papers/). Technical appendices (which can include, but are not restricted to, datasets and code) are allowed. Papers are expected to satisfy the highest scientific standards, just like regular submissions to the main track of IJCAI 2023. In addition, research papers in this track are expected to provide multidisciplinary scientific contributions towards the UN SDGs and the LNOB principle and to refer to work performed by or with NGOs, community organizations and government agencies. The presentation of case studies is highly encouraged in this track as a way to demonstrate a practical contribution to specific SDGs and the LNOB principle and to provide examples of how to translate global goals into local actions.

As with research papers submitted to the main track of IJCAI 2023, papers in this track should be anonymous. Unlike papers in the main track, however, there will be no author rebuttal and no summary reject phase. Accepted research papers in the AI for Good track will be included in the IJCAI proceedings. An award will be given to honour outstanding research papers in this track.

Submission of Research Project Proposals

This specific mechanism of the AI and Social Good track goes beyond research objectives. Research project proposals are expected to connect the dots between NGOs, academic research and government agencies to establish collaborative lines of work. Submissions in this category can range from incipient project ideas to projects under development, as long as they are based on in-the-field know-how, take a multilateral teamwork approach and have a clear implementation plan for real-world impact on the UN SDGs and the LNOB principle.

Research project proposals are not expected to present research results in this edition of IJCAI. However, selected proposals will be expected to report on their progress during the three subsequent editions of IJCAI (2024–2026) and to submit a research paper to one of those editions. In addition, selected research project proposals from the AI and Social Good track will be presented to a panel of government officials, NGOs and private companies at IJCAI 2023 to receive feedback on the project implementation plan and follow-up actions.

We suggest the following structure for research project proposals: problem statement (the real-world challenge the project aims to solve), link to specific SDGs and the LNOB principle, strategy, methods, potential case studies, expected results, evaluation criteria, challenges and limitations, ethical considerations, implementation plan and needs, and project team description.

The selection criteria will include the team’s expertise with respect to the challenge and the technology, the feasibility of the project implementation plan, the contribution to the SDGs and the LNOB principle, and the contribution to the state of the art in AI or the social or natural sciences.

Research project proposals should follow the same format (7 pages plus up to 2 pages for references) and general instructions as main track submissions (https://ijcai-23.org/call-for-papers/), with one exception: unlike papers submitted to the main track, research project proposals must not be anonymous and must include a 1-page appendix, not counted towards the page limit, with short CVs of all team members. In addition, technical appendices (which can include, but are not restricted to, datasets and code) are allowed. Unlike papers in the main track, there will be no author rebuttal and no summary reject phase. Accepted research project proposals will be published in the IJCAI 2023 proceedings, just like traditional technical papers. An award will be given to honour outstanding research project proposals in this track.

Submission of Demos

Unlike in 2022, authors will not be able to submit demos directly to the AI and Social Good track. However, they are invited to submit demos relevant to this special track’s topic to the IJCAI 2023 Demo track, indicating the “AI and Social Good” nature of the demo within the submission procedure.

Important Dates:

  • Submission site opening: January 4, 2023
  • Paper submission deadline: March 1, 2023
  • Notification of acceptance/rejection: April 19, 2023

Note: given the multilateral character of this track, a later submission deadline has been set than for the main track.

All deadlines are Anywhere on Earth (AoE).

Formatting guidelines: LaTeX styles and Word template: https://www.ijcai.org/authors_kit
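As a minimal illustration only, a submission skeleton based on the IJCAI 2023 authors kit might look like the following. The style-file name `ijcai23` and the `named` bibliography style are assumptions based on previous editions of the kit; please use the exact files and instructions from the authors kit linked above.

```latex
\documentclass{article}
\usepackage{ijcai23}   % style file from the authors kit (name assumed)
\usepackage{times}     % Times fonts, as typically required by the kit

\title{Title of Your AI and Social Good Submission}
% Research papers must be anonymous at submission time: omit author names.
% Research project proposals must NOT be anonymous (see above).

\begin{document}
\maketitle

\begin{abstract}
State the specific real-world challenge being tackled, the civil society
stakeholders involved, and the link to the SDGs and the LNOB principle.
\end{abstract}

\section{Introduction}
% Up to 7 pages of content, plus up to 2 pages of references.

\bibliographystyle{named}  % bibliography style assumed from prior kits
\bibliography{references}
\end{document}
```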

Submission site: papers should be submitted to https://cmt3.research.microsoft.com/IJCAI2023 by choosing “AI for Good” from the drop-down menu.

Track chairs:

Amir Banifatemi

Georgina Curto Rex

Frank Dignum

Nardine Osman

Enquiries: the track chairs can be reached at aiforgood@ijcai-23.org

Clarification on the Large Language Model (LLM) Policy

We (the Program Chairs) want to make the following statement with respect to the use of LLMs in papers submitted to the AI for Social Good track at IJCAI 2023:

Authors who use text generated by a large-scale language model (LLM), such as ChatGPT, should state so in the paper. They remain responsible for the entire text and for its theoretical and factual correctness, including references to other papers and appendices. They are also responsible for ensuring that no text produced with these LLMs is plagiarized.

We would like to further clarify the intention behind this statement and how we plan to implement this policy for the AI for Social Good track at IJCAI 2023.

Intention

During the past few years, we have observed and been part of rapid progress in large-scale language models (LLMs), both in research and in deployment. This progress has not slowed down but only sped up during the past few months. As many, including ourselves, have noticed, LLMs released in the past few months, such as OpenAI’s ChatGPT, are now able to produce text snippets that are often difficult to distinguish from human-written text. Undoubtedly, this is exciting progress in natural language processing and generation.

Such rapid progress often comes with unanticipated consequences as well as unanswered questions. As we have already seen during the past few weeks alone, there is, for instance, the question of whether text and images generated by large-scale generative models should be considered novel or mere derivatives of existing work. There is also the question of who owns the text snippets, images or any other media sampled from these generative models: the user of the generative model, the developer who trained the model, or the content creators who produced the training examples? It is certain that these questions, and many more, will be answered over time as these large-scale generative models are more widely adopted. However, we do not yet have clear answers to any of them.

Since how we answer these questions directly affects our reviewing process, which in turn affects members of our research community and their careers, we want to make our position on this new technology known. OpenAI released the beta version of ChatGPT at the end of November 2022, so we have unfortunately not had enough time to observe, investigate and consider its implications for our reviewing and publication process. We have decided not to prohibit producing or generating text using large-scale language models this year (2023), but rather to put the responsibility on the authors to use them in such a way that the ideas of the paper remain clearly and completely those of the authors.

We plan to investigate and discuss the impact, both positive and negative, of LLMs on reviewing and publishing in the field of AI. This decision will be revisited for future iterations of the AI for Social Good track.

Implementation and Enforcement

As we are well aware, it is difficult to detect whether a given text snippet was produced by a language model. The AI for Social Good PC team does not plan to run any automated or semi-automated system on submissions to check compliance with the LLM policy this year (2023). Instead, we will investigate a potential violation of the LLM policy when a submission is brought to our attention with a significant concern. Any submission flagged for a potential violation of this LLM policy will go through the same process as any other submission flagged for plagiarism.

As we learn more about the consequences and impacts of LLMs in academic publications, and as we redesign the LLM policy in future conferences (after 2023), we will consider different options and technologies to implement and enforce the latest LLM policy in future iterations.