AutoXiv

Robust ML Under Uncertainty.

Research on developing machine learning methods that are robust, safe, and reliable when dealing with distributional uncertainty, limited data, privacy constraints, or safety-critical deployment scenarios.

4 papers

Papers.

260421.0050
Wasserstein Distributionally Robust Risk-Sensitive Estimation via Conditional Value-at-Risk
Taha · Bitar
This paper develops a distributionally robust estimation method that minimizes the worst-case conditional value-at-risk (CVaR) of the estimation error over all distributions within a Wasserstein ball around a nominal model. The resulting estimator can be computed via a tractable semidefinite program and outperforms existing approaches on electricity price forecasting.
Formal Sciences
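The CVaR objective at the heart of this paper can be illustrated with a small empirical sketch. This is not the paper's semidefinite-programming formulation; it only shows what "CVaR of estimation error" measures: the average of the worst `(1 - alpha)` fraction of errors, rather than the overall mean. The function name and sample data are illustrative.

```python
import numpy as np

def empirical_cvar(losses, alpha=0.95):
    """Empirical conditional value-at-risk: the mean of the worst
    (1 - alpha) tail of the loss sample."""
    losses = np.sort(np.asarray(losses, dtype=float))
    k = int(np.ceil(alpha * len(losses)))  # index where the tail begins
    if k >= len(losses):
        return losses[-1]  # degenerate case: tail is the single worst loss
    return losses[k:].mean()

# Illustrative estimation errors: mostly small, one large outlier.
errors = [0.1, 0.2, 0.3, 1.0, 5.0]
print(empirical_cvar(errors, alpha=0.8))  # mean of the worst 20% -> 5.0
```

Minimizing worst-case CVaR over a Wasserstein ball, as the paper does, protects against exactly this tail behavior even when the error distribution itself is misspecified.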
260421.0062
Asset Harvester: Extracting 3D Assets from Autonomous Driving Logs for Simulation
Cao · Ren · Zhang +12
Asset Harvester converts sparse, real-world object observations from autonomous vehicle driving logs into complete 3D assets suitable for simulation. The system combines large-scale data curation, geometry-aware preprocessing, and a novel sparse-view-conditioned model (SparseViewDiT) to generate simulation-ready 3D objects from limited viewing angles.
Formal Sciences
260421.0064
Using large language models for embodied planning introduces systematic safety risks
Zhang · Qu · Li +4
This paper introduces DESPITE, a benchmark of over 12,000 tasks to evaluate safety in LLM-based robotic planning, revealing that even models with near-perfect planning ability produce dangerous plans 28% of the time. The study shows planning ability scales with model size while safety awareness remains relatively flat, creating a critical gap for deploying LLMs in robotics.
Formal Sciences
260421.0077
Tight Auditing of Differential Privacy in MST and AIM
Ganev · Annamalai · Kulynych
This paper introduces a GDP-based auditing framework that provides the first tight privacy audits for state-of-the-art differentially private synthetic data generators MST and AIM. The audits show a small gap between theoretical guarantees and empirical measurements in strong-privacy regimes.
Formal Sciences
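The core move in GDP-based auditing can be sketched in a few lines: run a membership-inference attack against the generator, record its true-positive and false-positive rates, and convert that operating point into an empirical Gaussian-DP parameter via the standard f-DP relation mu >= Phi^{-1}(TPR) - Phi^{-1}(FPR). The function name and the attack numbers below are hypothetical; the paper's audits of MST and AIM involve much more besides this conversion step.

```python
from statistics import NormalDist

def empirical_gdp_mu(tpr: float, fpr: float) -> float:
    """Lower bound on the GDP parameter mu implied by an attack that
    achieves true-positive rate `tpr` at false-positive rate `fpr`,
    using the Gaussian trade-off curve: mu >= Phi^{-1}(tpr) - Phi^{-1}(fpr)."""
    ndist = NormalDist()
    return ndist.inv_cdf(tpr) - ndist.inv_cdf(fpr)

# A hypothetical attack succeeding 60% of the time at a 10% false-positive rate:
print(empirical_gdp_mu(0.6, 0.1))
```

A "tight" audit in this sense means the empirical mu recovered from the best attack comes close to the mu implied by the mechanism's theoretical guarantee, which is what the paper reports for MST and AIM in strong-privacy regimes.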