ICML WORKSHOP
Programmatic Representations for Agent Learning
Sponsored by Basis
July 18th, 2025
West Meeting Room 301-305
Vancouver, Canada

This workshop explores using programmatic representations (e.g., code, symbolic programs, rules) to enhance agent learning and address key challenges in creating autonomous agents. By leveraging structured representations, we aim to improve interpretability, generalization, efficiency, and safety in agent systems, moving beyond the limitations of “black box” deep learning models. The workshop brings together researchers in sequential decision-making and program synthesis/code generation to discuss using programs as policies (e.g., LEAPS, Code as Policies, HPRL, RoboTool, Carvalho et al. 2024), reward functions (e.g., Eureka, Language2Reward, Text2Reward), skill libraries (e.g., Voyager), task generators (e.g., GenSim), or environment models (e.g., WorldCoder, Code World Models), ultimately driving progress toward robust, understandable, and adaptable autonomous agents across diverse applications.

Tentative Schedule

Location: West Meeting Room 301-305

Room Capacity: 710

Time | Event
8:20 - 8:30 | Opening Remarks
8:30 - 9:00 | Invited Talk: Animesh Garg
9:00 - 9:30 | Invited Talk: Amy Zhang
9:30 - 10:00 | Coffee Break
10:00 - 10:15 | Oral Presentation: Improving Parallel Program Performance with LLM Optimizers via Agent-System Interfaces
10:15 - 10:30 | Oral Presentation: Searching Latent Program Spaces
10:30 - 10:45 | Oral Presentation: Lifelong Experience Abstraction and Planning
10:45 - 11:00 | Sponsor Presentation: BASIS
11:00 - 11:30 | Invited Talk: Dale Schuurmans
11:30 - 12:00 | Invited Talk: Sheila McIlraith
12:00 - 13:00 | Lunch
13:00 - 14:00 | Poster Session 1
14:00 - 14:30 | Invited Talk: Jason Ma
14:30 - 15:00 | Invited Talk: Wenhao Yu
15:00 - 16:00 | Poster Session 2
16:00 - 16:15 | Coffee Break
16:15 - 17:00 | Panel Discussion
17:00 - 17:30 | Networking Session

All times are in Pacific Time (PT).

Speakers

Organizers

Call For Papers

We invite the submission of research papers and position papers on the topic of programmatic representations for agent learning. This workshop aims to explore the use of program-like structures to represent policies, reward functions, tasks, and environment models.

Topics of interest include, but are not limited to:

  • Programs as Policies: Representing decision-making logic through programmatic policies in Python or domain-specific languages.
  • Programs as Reward Functions: Synthesizing reward function code for agent learning.
  • Programs as Skill Libraries: Representing acquired skills as programs, enabling skill reuse and composition.
  • Programmatically Generating Tasks: Generating code that describes diverse task variants.
  • Programs as Environment Models: Inferring executable code to simulate environment dynamics.
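To make these representations concrete, here is a minimal illustrative sketch (our own toy example, not drawn from any of the cited works): a policy and a reward function, each written as a short Python program for a hypothetical 1-D gridworld. The function and variable names are ours, chosen for illustration only.

```python
# Toy 1-D gridworld: an agent at an integer position moves toward a goal.
# Both the policy and the reward are ordinary, inspectable Python code.

def policy(state: int, goal: int) -> int:
    """Programmatic policy: step one unit toward the goal."""
    if state < goal:
        return +1
    if state > goal:
        return -1
    return 0

def reward(state: int, goal: int) -> float:
    """Programmatic reward: +1.0 at the goal, a small step penalty otherwise."""
    return 1.0 if state == goal else -0.1

def rollout(start: int, goal: int, max_steps: int = 20):
    """Run the policy until the goal is reached or the step budget runs out."""
    state, total = start, 0.0
    trace = [state]
    for _ in range(max_steps):
        state += policy(state, goal)
        total += reward(state, goal)
        trace.append(state)
        if state == goal:
            break
    return trace, total

trace, total = rollout(0, 3)
```

Because both components are plain code, they can be read, unit-tested, edited, and composed, which is precisely the interpretability and reusability argument motivating the topics above.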

Submission Types:

  • Full Papers: Up to 9 pages in ICML or NeurIPS format, potentially with large-scale experiments.
  • Short Papers: 2-4 pages in ICML or NeurIPS format, with proof-of-concept demonstrations (demos, code, blog posts).

Important Dates:

  • Submission Deadline: May 30, 2025, AoE (extended from May 24, 2025)
  • Author Notification: June 13, 2025, AoE (extended from June 7, 2025)
  • Camera Ready Deadline: July 7, 2025, AoE
  • Workshop Date: July 18, 2025

Accepted papers will be presented during poster sessions, with exceptional submissions selected for spotlight oral presentations.

All accepted papers will be made publicly available as non-archival reports, allowing for future submissions to archival conferences or journals.

Please submit your papers to the Open Review site.

Camera Ready Instructions

Please incorporate the reviewers’ feedback and prepare your camera-ready submission, then submit it on OpenReview. The camera-ready version should be de-anonymized and include at most 9 pages for full papers or 2-4 pages for short papers, excluding references and appendices. Papers may use the ICML or NeurIPS format, with the footnote “ICML 2025 Workshop on Programmatic Representations for Agent Learning”.

Camera-Ready LaTeX Templates:

The camera-ready deadline is July 7, 2025, Anywhere on Earth (AoE).

Accepted Papers

Optimizing Agentic Architectures for Cybersecurity Tasks with Trace

Anish Chaudhuri, Prerit Choudhary, Max Piasevoli, Shannon Xiao, Allen Nie

Leveraging Learned Programmatic Facts for Enhanced LLM Agent Planning and World Modeling

Samuel Holt, Max Ruiz Luyten, Thomas Pouplin, Mihaela van der Schaar

FormulaCode: Evaluating Agentic Superoptimization on Large Codebases

Atharva Sehgal, James Hou, Swarat Chaudhuri, Jennifer J. Sun, Yisong Yue

PDL: Declarative Representation of Agentic Prompting Patterns

Mandana Vaziri, Louis Mandel, Martin Hirzel, Anca Sailer, Yuji Watanabe, Hirokuni Kitahara

Zero-Shot Instruction Following in RL via Structured LTL Representations

Mattia Giuri, Mathias Jackermeier, Alessandro Abate

EditLord: Learning Code Transformation Rules for Code Editing

Weichen Li, Albert Jan, Baishakhi Ray, Junfeng Yang, Chengzhi Mao, Kexin Pei

Time to Impeach LLM-as-a-Judge: Programs are the Future of Evaluation

Tzu-Heng Huang, Harit Vishwakarma, Frederic Sala

InstructFlow: Adaptive Symbolic Constraint-Guided Code Generation for Long-Horizon Planning

Haotian Chi, Zeyu Feng, Yueming Lyu, Chengqi Zheng, Linbo Luo, Yew-Soon Ong, Ivor Tsang, Hechang Chen, Yi Chang, Haiyan Yin

Sketch-Plan-Generalize: Learning and Planning with Neuro-Symbolic Programmatic Representations for Inductive Spatial Concepts

Namasivayam Kalithasan, Sachit Sachdeva, Himanshu Gaurav Singh, Vishal Bindal, Arnav Tuli, Gurarmaan Singh Panjeta, Harsh Himanshu Vora, Divyanshu Agarwal, Rohan Paul, Parag Singla

Discovering Logic-Informed Intrinsic Rewards to Explain Human Policies

Chengzhi Cao, Yinghao Fu, Chao Yang, Shuang Li

Searching Latent Program Spaces

Matthew Macfarlane, Clément Bonnet

Lifelong Experience Abstraction and Planning

Peiqi Liu, Jiayuan Mao, Leslie Pack Kaelbling, Joshua B. Tenenbaum

Afterburner: Reinforcement Learning Facilitates Self-Improving Code Efficiency Optimization

Mingzhe Du, Anh Tuan Luu, Yue Liu, Yuhao QING, Dong HUANG, Xinyi He, Qian Liu, Zejun MA, See-Kiong Ng

Interpretable Reward Modeling with Active Concept Bottlenecks

Sonia Laguna, Kasia Kobalczyk, Julia E Vogt, Mihaela van der Schaar

Weak-for-Strong: Training Weak Meta-Agent to Harness Strong Executors

Fan Nie, Lan Feng, Haotian Ye, Weixin Liang, Pan Lu, Huaxiu Yao, Alexandre Alahi, James Zou

Learning Game-Playing Agents with Generative Code Optimization

Zhiyi Kuang, Ryan Rong, YuCheng Yuan, Allen Nie

Learning to Discover Abstractions for LLM Reasoning

Yuxiao Qu, Anikait Singh, Yoonho Lee, Amrith Setlur, Ruslan Salakhutdinov, Chelsea Finn, Aviral Kumar

DyPO: Dynamic Policy Optimization for Multi-Turn Interactive Reasoning

Xiao Feng, Bo Han, Zhanke Zhou, Jiaqi Fan, Jiangchao Yao, Ka Ho Li, Dahai Yu, Michael Ng

ReasonRec: A Reasoning-Augmented Multimodal Agent for Unified Recommendation

Yihua Zhang, Xi Liu, Xihuan Zeng, Mingfu Liang, Jiyan Yang, Rong Jin, Wen-Yen Chen, Yiping Han, Bo Long, Huayu Li, Buyun Zhang, Liang Luo, Sijia Liu, Tianlong Chen

Inefficiencies of Meta Agents for Agent Design

Batu El, Mert Yuksekgonul, James Zou

Improving Parallel Program Performance with LLM Optimizers via Agent-System Interfaces

Anjiang Wei, Allen Nie, Thiago S. F. X. Teixeira, Rohan Yadav, Wonchan Lee, Ke Wang, Alex Aiken

This website is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License and is based on the Leela Interp project. That means you're free to borrow the source code of this website with attribution.