Daily AI Paper Report (2026-04-27)
Run stats
- Candidates: 4394
- Selected: 30
- Deepread completed: 30
- Window (UTC): 2026-04-24T00:00:00Z → 2026-04-25T00:00:00Z (weekend_backlog_sat, expanded=0)
| arXiv ID | Title / Links | Categories | Score | Why | Tags |
|---|---|---|---|---|---|
| 2604.17745 | HiRAS: A Hierarchical Multi-Agent Framework for Paper-to-Code Generation and Execution | cs.CL | 90 | Hierarchical multi-agent paper-to-code reproduction + improved Paper2Code eval protocol (P2C-Ex). | agents, paper-to-code, reproducibility, evaluation, multi-agent, automation |
| 2604.21510 | OptiVerse: A Comprehensive Benchmark towards Optimization Problem Solving | cs.CL | 90 | OptiVerse: 1k optimization problems; 22 LLM eval shows big drop on hard tasks; strong benchmark value | benchmark, LLM-evaluation, optimization, reasoning, tool-use |
| 2604.19633 | Time Series Augmented Generation for Financial Applications | cs.AI, cs.CE | 90 | Benchmark for LLM agents doing verifiable financial time-series tool use; strong eval focus | agents, tool-use, evaluation, benchmarks, finance, verifiable-tools |
| 2604.21882 | Revisiting Non-Verbatim Memorization in Large Language Models: The Role of Entity Surface Forms | cs.CL | 90 | RedirectQA probes factual recall vs name/surface-form access; key for reliability & memorization evals | LLMs, memorization, factuality, evaluation, datasets, entity-linking, robustness |
| 2604.20572 | Ask Only When Needed: Proactive Retrieval from Memory and Skills for Experience-Driven Lifelong Agents | cs.CL | 90 | Proactive memory/skill retrieval for lifelong agents; strong agentic relevance and reusable framework. | agents, lifelong-learning, memory, retrieval, tool-use, online-learning |
| 2604.20621 | SoK: The Next Frontier in AV Security: Systematizing Perception Attacks and the Emerging Threat of Multi-Sensor Fusion | cs.CR | 88 | SoK of AV perception attacks incl. multi-sensor fusion threats; taxonomy + gaps for defenses. | security, autonomous-vehicles, perception-attacks, sensor-fusion, survey, robustness |
| 2604.21192 | How VLAs (Really) Work In Open-World Environments | cs.RO, cs.AI | 88 | Critiques VLA evals for hiding unsafe behaviors; proposes safety-relevant analysis in open-world | robotics, VLA, safety-evaluation, open-world, long-horizon, deployment |
| 2604.20711 | Participatory provenance as representational auditing for AI-mediated public consultation | cs.AI, cs.HC | 88 | Audits input-fidelity of AI summarization for public consultation via provenance/optimal transport metrics | auditing, summarization, governance, evaluation, optimal-transport, causal-inference, public-policy |
| 2604.21725 | AEL: Agent Evolving Learning for Open-Ended Environments | cs.CL, cs.AI, cs.CE | 88 | Two-timescale learning for long-horizon LLM agents: adaptive memory retrieval + reflection updates. | llm-agents, continual-learning, memory, retrieval-policy, reflection, bandits |
| 2604.19016 | AlignCultura: Towards Culturally Aligned Large Language Models? | cs.CL | 86 | UNESCO-grounded cultural alignment dataset/pipeline for HHH evaluation; useful for safety+fairness audits | cultural-alignment, evaluation, dataset, HHH, fairness, safety |
| 2604.21193 | Trust but Verify: Introducing DAVinCI -- A Framework for Dual Attribution and Verification in Claim Inference for Language Models | cs.AI | 86 | DAVinCI attributes+verifies claims with calibration; targets hallucinations and interpretability in LMs | factuality, hallucinations, verification, attribution, calibration, reliability |
| 2604.21579 | A Metamorphic Testing Approach to Diagnosing Memorization in LLM-Based Program Repair | cs.SE, cs.AI | 86 | Metamorphic testing to expose memorization/data leakage in LLM program repair evaluations. | data-leakage, memorization, evaluation, program-repair, metamorphic-testing, software-engineering |
| 2603.17883 | SoK: From Silicon to Netlist and Beyond – Two Decades of Hardware Reverse Engineering Research | cs.CR | 86 | Comprehensive SoK on hardware reverse engineering; strong security relevance and reusable overview. | security, SoK, hardware, reverse-engineering, supply-chain, verification |
| 2604.12596 | KumoRFM-2: Scaling Foundation Models for Relational Learning | cs.LG, cs.AI | 86 | Foundation model for relational DBs; avoids flattening, supports ICL+finetune, temporal consistency. | foundation-models, relational-learning, in-context-learning, databases, tabular, pretraining |
| 2604.19087 | OLLM: Options-based Large Language Models | cs.AI | 86 | Latent “options” for next-token; controllable diversity/search with minimal params on pretrained LLMs | LLM, latent-variable, decoding, reasoning, controllability, efficient-adaptation |
| 2604.21232 | ReCAPA: Hierarchical Predictive Correction to Mitigate Cascading Failures | cs.AI | 86 | Hierarchical predictive correction for VLA agents to prevent cascading multi-step failures. | agents, VLA, planning, robustness, error-correction, multimodal |
| 2604.18356 | ComPASS: Towards Personalized Agentic Social Support via Tool-Augmented Companionship | cs.CL | 86 | Tool-augmented companionship + new benchmark for personalized social support; relevant to agent eval/safety. | agents, tool-use, evaluation, benchmarks, personalization, social-support |
| 2604.20511 | CHASM: Unveiling Covert Advertisements on Chinese Social Media | cs.LG, cs.AI, cs.CL, cs.CV, cs.CY | 85 | CHASM dataset for multimodal covert-ad detection; concrete, safety-adjacent eval data from real platform. | datasets, evaluation, multimodal, content-moderation, security, adversarial |
| 2604.21917 | CrossCommitVuln-Bench: A Dataset of Multi-Commit Python Vulnerabilities Invisible to Per-Commit Static Analysis | cs.CR, cs.SE | 84 | Benchmark of multi-commit CVEs that evade per-commit SAST; strong for secure code/agent tooling evals | security, benchmark, vulnerabilities, SAST, software-engineering, datasets |
| 2604.17816 | Privacy-Preserving Product-Quantized Approximate Nearest Neighbor Search Framework for Large-scale Datasets via A Hybrid of Fully Homomorphic Encryption and Trusted Execution Environment | cs.CR | 84 | Privacy-preserving ANN for embeddings using FHE+TEE; relevant to secure RAG/vector DBs | privacy, security, ANN, vector-search, FHE, TEE, RAG |
| 2604.19172 | Reasoning-Aware AIGC Detection via Alignment and Reinforcement | cs.AI | 84 | New multi-domain AIGC detection dataset + reasoning-chain detector trained with RL for robustness | AIGC-detection, datasets, robustness, reasoning, RL, misinformation, evaluation |
| 2604.21396 | VG-CoT: Towards Trustworthy Visual Reasoning via Grounded Chain-of-Thought | cs.CV, cs.AI | 84 | Automated dataset linking each visual reasoning step to image regions for trustworthy LVLM eval. | vision-language, grounding, chain-of-thought, trustworthiness, dataset, evaluation |
| 2604.11529 | TempusBench: An Evaluation Framework for Time-Series Forecasting | cs.LG | 84 | Needed TS foundation-model eval framework; tackles dataset leakage/metadata issues and standardization. | evaluation, benchmarks, time-series, foundation-models, data-contamination, forecasting |
| 2604.18459 | Progressive Online Video Understanding with Evidence-Aligned Timing and Transparent Decisions | cs.CV, cs.AI | 84 | Streaming video agent: evidence-aligned response timing + transparent decision maker under compute limits | video-LLM, online-inference, agent, transparency, evaluation, multimodal |
| 2604.05966 | FinReporting: An Agentic Workflow for Localized Reporting of Cross-Jurisdiction Financial Disclosures | cs.CL | 84 | Auditable agentic workflow with ontology mapping + anomaly logging for verified financial reporting. | agents, auditing, verification, ontology, information-extraction, LLM-workflows |
| 2604.21916 | MathDuels: Evaluating LLMs as Problem Posers and Solvers | cs.CL, cs.SE | 84 | Self-play math benchmark where models pose+solve; better capability separation than static tests. | evaluation, benchmarks, math, self-play, adversarial-testing, LLMs |
| 2604.08352 | Security Concerns in Generative AI Coding Assistants: Insights from Online Discussions on GitHub Copilot | cs.SE, cs.CR, cs.HC | 82 | Empirical security concerns for GenAI coding assistants from GitHub Copilot discussions. | LLM-security, coding-assistants, developer-practice, secure-software, human-factors |
| 2604.21416 | CSC: Turning the Adversary's Poison against Itself | cs.CR, cs.AI | 82 | Backdoor defense using latent-space cluster dynamics; targets poisoning without heavy utility loss | backdoors, data-poisoning, robustness, defense, security, representation-learning |
| 2604.18543 | ClawEnvKit: Automatic Environment Generation for Claw-Like Agents | cs.AI, cs.CL | 82 | Auto-generates and validates agent environments from NL; scalable eval/training infra for agents | agents, environment-generation, evaluation, benchmarks, validation, tool-interfaces |
| 2604.19090 | Dual-Guard: Dual-Channel Latent Watermarking for Provenance and Tamper Localization in Diffusion Images | cs.CR | 82 | Dual-channel watermarking for diffusion provenance + tamper localization; practical integrity angle | provenance, watermarking, diffusion, content-integrity, tamper-detection, forensics |
AI Paper Insight Brief
2026-04-27
1) Executive takeaways (read this first)
- Evaluation is becoming the bottleneck—and papers are responding with “auditability-first” frameworks: multiple works introduce benchmarks + protocols that explicitly target leakage, hallucinated evaluation, or missing safety signals (TempusBench, HiRAS/P2C-Ex, TSAG, OptiVerse/DVA-Agent, MathDuels, CHASM, B1K safety Q-scores).
- Agent progress is shifting from “more tools” to “better control of when/why to use tools/memory”: proactive retrieval as an explicit action with RL supervision (PROACTAGENT) and bandit-selected retrieval policies + reflection (AEL) both show large gains and strong ablation evidence that using experience is the key lever.
- Security research is emphasizing “system-level gaps” over isolated attacks: AV perception attacks are under-studied at the fusion layer (AV SoK + cross-modal spoofing PoC), and software security datasets highlight blind spots in common pipelines (CrossCommitVuln-Bench shows per-commit SAST misses multi-commit chains; Copilot discourse surfaces licensing/provenance and insecure-suggestion concerns).
- Provenance and integrity are converging on practical, deployable signals: diffusion-image watermarking moves beyond global provenance to tamper localization via dual latent channels (Dual-Guard), while public-consultation summarization is audited via input representational fidelity metrics (participatory provenance).
- Privacy-preserving retrieval is getting closer to interactive scale: a hybrid FHE+TEE + PQ design reports >50 QPS sequential encrypted ANN at Recall@10 > 0.9 on million-scale datasets (PPPQ-ANN), but still leaves access-pattern leakage open.
2) Key themes (clusters)
Theme: Benchmarks & evaluation that resist leakage, hallucinated scoring, and missing safety signals
- Why it matters: As models saturate static tests, the limiting factor becomes whether we can trust measured progress (no leakage, fair tuning, correct scoring, safety-aware metrics).
- Representative papers:
- TempusBench: An Evaluation Framework for Time-Series Forecasting
- HiRAS: A Hierarchical Multi-Agent Framework for Paper-to-Code Generation and Execution
- OptiVerse: A Comprehensive Benchmark towards Optimization Problem Solving
- A Metamorphic Testing Approach to Diagnosing Memorization in LLM-Based Program Repair
- Common approach:
- Build new datasets/benchmarks to avoid contamination (TempusBench “leakage-free” datasets; OptiVerse curated from textbooks/exams).
- Add protocol-level fixes for known evaluator failure modes (HiRAS’ Paper2Code-Extra boosts ref-free correlation with ref-based scoring).
- Measure robustness to meaning-preserving transformations as a memorization diagnostic (metamorphic testing for APR).
- Introduce auditor/verification agents to catch silent semantic errors (OptiVerse DVA-Agent: text→math vs code→math cross-check).
- Open questions / failure modes:
- How to keep benchmarks “fresh” once they become popular (TempusBench notes eventual contamination; proposes dynamic benchmarks).
- Reference-free evaluation remains fragile when execution/environment is the bottleneck (HiRAS shows execution failures can collapse scores).
- Metamorphic variants risk “unnaturalness” that may confound conclusions (APR MT paper notes this threat).
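The memorization diagnostic above can be sketched as a tiny metamorphic probe: apply a semantics-preserving identifier rename and check whether the model's repair behavior survives it. This is a minimal sketch of the idea, not the paper's pipeline; `repair_model` and the rename map are hypothetical stand-ins.

```python
# Metamorphic memorization probe (sketch): a semantics-preserving rename
# should not change whether a model produces an equivalently renamed fix.
import re

def rename_identifiers(code: str, mapping: dict) -> str:
    """Semantics-preserving transform: rename identifiers via word-boundary regex."""
    for old, new in mapping.items():
        code = re.sub(rf"\b{re.escape(old)}\b", new, code)
    return code

def memorization_delta(repair_model, buggy_code: str, mapping: dict) -> bool:
    """Return True when behavior changes under the rename (memorization signal)."""
    fix_original = repair_model(buggy_code)
    fix_variant = repair_model(rename_identifiers(buggy_code, mapping))
    # Map the variant's fix back to the original namespace before comparing.
    inverse = {v: k for k, v in mapping.items()}
    return fix_original != rename_identifiers(fix_variant, inverse)
```

A model whose repair only fires on the verbatim training-set surface form flags `True`; a model applying a general edit rule flags `False`.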
Theme: Proactive memory/retrieval as a learned action in lifelong agents
- Why it matters: Long-horizon agents fail from context overload or missing key experience; learning when to retrieve can improve both success and efficiency.
- Representative papers:
- Ask Only When Needed: Proactive Retrieval from Memory and Skills for Experience-Driven Lifelong Agents (PROACTAGENT)
- AEL: Agent Evolving Learning for Open-Ended Environments
- Common approach:
- Treat retrieval as explicit control rather than passive RAG (PROACTAGENT adds retrieval to the action space).
- Provide step-level supervision for retrieval via counterfactual comparisons (paired-branch rollouts with/without retrieval in PROACTRL).
- Maintain typed memory/skills (facts, episodes, success/failure skills; AEL’s episodic→semantic→procedural tiers).
- Use lightweight online adaptation (AEL uses Thompson Sampling bandit over retrieval policies).
- Open questions / failure modes:
- PROACTRL assumes prefix replayability for paired rollouts; may break in stochastic/non-replayable environments.
- Memory growth and eviction policies are underdeveloped (PROACTAGENT notes no capacity cap / learned eviction).
- Credit assignment across modules remains hard (AEL finds more complex credit methods degrade performance in noisy regimes).
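The bandit-based policy selection summarized above can be sketched with a Beta-Bernoulli Thompson Sampler over candidate retrieval policies. The policy names and binary-success reward model here are illustrative assumptions, not AEL's actual formulation.

```python
# Thompson Sampling over retrieval policies (sketch): sample a plausible
# success rate per policy from its Beta posterior, act greedily on the
# samples, and update the chosen policy's counts with the observed outcome.
import random

class RetrievalPolicyBandit:
    def __init__(self, policies):
        # Beta(1, 1) prior on each policy's success probability.
        self.stats = {p: [1, 1] for p in policies}  # [alpha, beta]

    def select(self) -> str:
        samples = {p: random.betavariate(a, b) for p, (a, b) in self.stats.items()}
        return max(samples, key=samples.get)

    def update(self, policy: str, success: bool) -> None:
        self.stats[policy][0 if success else 1] += 1
```

Exploration decays naturally: as a policy's posterior sharpens, its samples concentrate, so a clearly better retrieval policy ends up selected almost always.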
Theme: Agent environment generation & scalable agent evaluation
- Why it matters: Manual environment/benchmark creation doesn’t scale and becomes stale; auto-generation enables continuous, leakage-resistant evaluation.
- Representative papers:
- ClawEnvKit: Automatic Environment Generation for Claw-Like Agents
- MathDuels: Evaluating LLMs as Problem Posers and Solvers
- Common approach:
- Generate tasks from natural language specs with validation loops (ClawEnvKit Parser/Generator/Validator → executable sandbox tasks).
- Use self-play / co-evolving difficulty to avoid benchmark ceilings (MathDuels: models author + solve; Rasch/IRT ranking).
- Prefer deterministic checks where possible; cap LLM-judge influence (ClawEnvKit uses 15 deterministic checks + capped llm_judge weight).
- Open questions / failure modes:
- Sim-to-real gaps: mock services omit auth/rate limits/schema drift (ClawEnvKit limitation).
- Self-play still yields many non-discriminative items (MathDuels reports ~39% solved by all non-authors).
- Verifier dependence: automated verification still relies on model backbones in edge cases (MathDuels).
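The "deterministic checks first, capped judge" pattern can be illustrated as a convex combination of scores. The 0.2 cap, equal check weighting, and function names below are assumptions for illustration, not ClawEnvKit's actual values.

```python
# Composite task score (sketch): deterministic validators carry most of
# the weight; the LLM judge's influence is bounded by judge_cap, so a
# manipulated or hallucinating judge can move the score by at most that much.
def task_score(det_checks, judge_score: float, judge_cap: float = 0.2) -> float:
    """det_checks: list of pass/fail booleans from deterministic validators.
    judge_score: LLM-judge score in [0, 1], clamped before weighting."""
    det = sum(det_checks) / len(det_checks)
    judge = min(max(judge_score, 0.0), 1.0)
    return (1 - judge_cap) * det + judge_cap * judge
```

By construction, two runs with identical deterministic results can differ by at most `judge_cap`, which is the point of capping judge weight.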
Theme: Provenance, integrity, and representational auditing
- Why it matters: Trust in AI outputs increasingly requires traceability—either to detect tampering (media) or to ensure summaries don’t erase minority views (governance).
- Representative papers:
- Dual-Guard: Dual-Channel Latent Watermarking for Provenance and Tamper Localization in Diffusion Images
- Participatory provenance as representational auditing for AI-mediated public consultation
- Common approach:
- Move beyond binary provenance to localized evidence (Dual-Guard produces block heatmaps via fused latent evidence).
- Audit transformations from inputs→outputs with distributional metrics (participatory provenance uses per-participant coverage + Wasserstein-2 gap).
- Provide operational tooling (Co-creation Provenance Lab; platform-side “Full mode” watermark verification).
- Open questions / failure modes:
- Dual-Guard depends on owner-side round-trip reference latents; adaptive white-box attacks are out of scope.
- Embedding dependence and proxy nature of coverage metrics can affect individual-level conclusions (participatory provenance reports rank correlation variability).
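On pre-computed embeddings, the two audit signals above can be sketched in a few lines: a per-participant coverage proxy (best cosine match into the summary) and an empirical 1-D Wasserstein-2 gap between score distributions. The paper's exact metric definitions may differ; this is a toy version of the idea.

```python
# Representational-audit sketch: coverage per participant, plus a W2 gap
# between two score samples (e.g., coverage under the official summary
# vs. under a random-baseline summary).
import math

def participant_coverage(participant_vecs, summary_vecs):
    """Best cosine similarity between each participant's input embedding
    and any summary-sentence embedding (a proxy for being represented)."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))
    return [max(cos(p, s) for s in summary_vecs) for p in participant_vecs]

def w2_gap_1d(scores_a, scores_b):
    """Empirical 1-D Wasserstein-2 distance between two equal-size samples:
    sqrt of the mean squared gap between sorted order statistics."""
    pairs = zip(sorted(scores_a), sorted(scores_b))
    return math.sqrt(sum((x - y) ** 2 for x, y in pairs) / len(scores_a))
```

As the theme's open questions note, both signals inherit the embedding model's biases, so individual-level conclusions should be treated as proxies.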
Theme: Security & privacy gaps in modern AI pipelines (hardware, AV fusion, code, retrieval)
- Why it matters: Real-world deployment fails at interfaces—supply chain trust, multi-sensor fusion, CI/CD assumptions, and privacy of embedding retrieval.
- Representative papers:
- SoK: From Silicon to Netlist and Beyond – Two Decades of Hardware Reverse Engineering Research
- SoK: The Next Frontier in AV Security: Systematizing Perception Attacks and the Emerging Threat of Multi-Sensor Fusion
- CrossCommitVuln-Bench: A Dataset of Multi-Commit Python Vulnerabilities Invisible to Per-Commit Static Analysis
- Privacy-Preserving PQ ANN via Hybrid FHE+TEE
- Common approach:
- Systematize fragmented fields and quantify reproducibility gaps (HRE SoK: only 7/187 reproducible from artifacts).
- Expose pipeline assumption failures (per-commit SAST misses cross-commit chains; AV fusion treated as defense but is attack surface).
- Hybridize security mechanisms for practicality (FHE+TEE+PQ to reach interactive PP-ANN throughput).
- Open questions / failure modes:
- Legal/benchmarking barriers block cumulative progress (HRE SoK highlights legal clarity + durable artifacts as core obstacles).
- AV fusion attacks need hardware-in-the-loop validation; current PoC is simulated.
- PP-ANN still doesn’t address access-pattern leakage; honest-but-curious threat model.
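The per-commit blind spot behind CrossCommitVuln-Bench can be illustrated with a toy scanner: each commit is individually benign, but replaying history exposes the assembled flaw. The risk predicates below are deliberately naive stand-ins for real SAST rules.

```python
# Per-commit vs. history-aware scanning (sketch). Commits are dicts with
# "added"/"removed" line lists; predicates decide what counts as risky.
def per_commit_flags(commits, is_risky_line):
    """Scan only the lines added in each commit, in isolation."""
    return [any(is_risky_line(l) for l in c["added"]) for c in commits]

def history_aware_flag(commits, is_risky_state):
    """Replay commits and evaluate the accumulated file state instead."""
    state = set()
    for c in commits:
        state |= set(c["added"])
        state -= set(c.get("removed", []))
    return is_risky_state(state)
```

Splitting a tainted source (commit 1) from its dangerous sink (commit 2) defeats the per-commit view while the replayed state still trips the history-aware check.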
3) Technical synthesis
- Multiple papers converge on “auditable decomposition”: break systems into stages with explicit status labels/logs (FinReporting’s OK/MISSING/PARSE_ERROR; ClawEnvKit audit logs; participatory provenance metrics; DAVinCI attribution+verification).
- Counterfactual evaluation is emerging as a core tool: PROACTRL’s paired rollouts (retrieve vs not) mirrors OptiVerse’s dual-view (text→math vs code→math) and HiRAS’ focus on execution vs code-only scoring gaps.
- Benchmarks increasingly include hard negatives that look like positives (CHASM includes product-sharing non-ads; CrossCommitVuln-Bench requires individually benign commits; AIGC-text-bank includes AI-Polish).
- There’s a clear trend toward measuring what classic metrics miss:
- Timing deviation and transparency in streaming video (Thinking-QwenVL).
- Safety violations and non-target-object interactions in embodied tasks (sQ/seQ for B1K).
- Representational exclusion in summarization (coverage + W2 + concept recall/precision).
- Several works show that execution/environment is the dominant failure mode once code/text is plausible (HiRAS: import/env issues can drop scores dramatically).
- “Leakage-aware” evaluation appears in multiple domains: time-series (TempusBench), APR (metamorphic testing + NLL), and optimization (OptiVerse filtering web-accessible solutions + strict numeric verification).
- Tool-using systems are moving toward bounded decision spaces to reduce unsafe autonomy (FinReporting’s KEEP/REPAIR/NEED_REVIEW; ClawEnvKit’s deterministic checks + capped judge weight).
- Security SoKs highlight a meta-problem shared with ML: artifact scarcity and fragility (HRE reproducibility 4%) parallels concerns in agent benchmarks about staleness/leakage and in evaluation about hallucinated judges.
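The counterfactual paired-rollout pattern noted above can be sketched as a simple advantage estimate: replay the same prefix with and without the retrieval action and compare mean returns. Function names are placeholders, and the sketch inherits the replayability assumption flagged earlier.

```python
# Counterfactual paired rollouts (sketch): step-level credit for a
# retrieval action, estimated as the mean reward gap between branches
# that share the same replayed prefix.
def retrieval_advantage(rollout_fn, state, n: int = 4) -> float:
    """rollout_fn(state, retrieve=...) -> scalar return of one rollout.
    Assumes the environment prefix can be replayed deterministically."""
    with_r = [rollout_fn(state, retrieve=True) for _ in range(n)]
    without = [rollout_fn(state, retrieve=False) for _ in range(n)]
    return sum(with_r) / n - sum(without) / n
```

A positive gap supervises "retrieve here"; a near-zero gap supervises skipping retrieval, which is where the efficiency gains come from.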
4) Top 5 papers (with “why now”)
1) ClawEnvKit: Automatic Environment Generation for Claw-Like Agents
- Automates environment creation from NL specs into executable tasks E=(P,M,C) with validation/regeneration loops.
- Releases Auto-ClawEval (1,040 tasks) and shows no model saturates completion (34%–76%), making it useful for frontier tracking.
- Demonstrates harness matters: structured harnesses beat ReAct by up to 15.7 points.
- Skepticism: mock services may not transfer to real APIs (auth, schema drift, rate limits).
2) Ask Only When Needed: Proactive Retrieval from Memory and Skills for Experience-Driven Lifelong Agents
- Makes retrieval a first-class action and trains it with paired-branch rollouts to get step-level supervision.
- Large ablation signal: removing PROACTRL drops SciWorld SR 73.50% → 26.50%.
- Shows efficiency gains (fewer rounds/tokens) alongside success improvements.
- Skepticism: relies on replayability assumptions; memory scaling/eviction not solved.
3) Participatory provenance as representational auditing for AI-mediated public consultation
- Introduces concrete metrics (coverage, W2 gap, AIPW causal attribution, concept fidelity) to audit input→summary representational fidelity.
- Empirical finding: official summaries underperform random baselines on coverage and exclude ~15–17% of participants, concentrated in dissent clusters.
- Provides an operational tool (Co-creation Provenance Lab) for audit-and-revision workflows.
- Skepticism: embedding-based proxies and embedding choice sensitivity can affect individual-level rankings.
4) Dual-Guard: Dual-Channel Latent Watermarking for Provenance and Tamper Localization in Diffusion Images
- Combines a robust global provenance anchor (GS in z_T) with a learned spatial codec (in z_0) to localize tampering.
- Reports near-perfect closed-set provenance discrimination and ≥99.9% detection across multiple edit/tamper scenarios; provides coarse localization (16×16 blocks).
- Clear complementarity: GS-only misses local edits; dual-channel fixes this.
- Skepticism: depends on owner-side round-trip reference latents; adaptive white-box attacks not evaluated.
5) OptiVerse: A Comprehensive Benchmark towards Optimization Problem Solving
- Broadens optimization evaluation to six domains and shows severe headroom (top models ~25–27% on Hard).
- Identifies dominant failure mode as semantic modeling/logic errors (code runs but is wrong).
- DVA-Agent’s dual-view auditing improves Hard accuracy (e.g., 16.67% → 24.33% for Qwen3-235B-Instruct) while triggering edits only ~23–32% of cases.
- Skepticism: textbook/exam distribution may not reflect messy industrial optimization; evaluation pipeline is compute-heavy.
5) Practical next steps
- For agent builders: implement retrieval-as-action and train it with counterfactual rollouts (retrieve vs not) to get step-level supervision; track both success and efficiency (rounds/tokens).
- For benchmark owners: add metamorphic variants (semantics-preserving transforms) and report robustness deltas; pair with familiarity proxies where available (e.g., NLL for open models).
- For evaluation pipelines: adopt dual-view auditing for semantic correctness (spec→formalization vs code→formalization) in any domain where “runs” ≠ “correct” (optimization, data pipelines, ETL, policy rules).
- For safety in embodied/VLA: extend metrics beyond terminal success—log handling/placement violations and non-target interactions (sQ/seQ-style) and require reporting of trial-to-trial variance.
- For provenance/integrity: if you operate a platform, consider closed-set verification designs (registered artifacts) and add localization signals, not just binary provenance.
- For privacy-preserving retrieval: prototype hybrid designs (PQ + cryptography + enclave) but explicitly measure what remains leaked (e.g., access patterns) and document threat model boundaries.
- For security tooling: evaluate SAST/CI on cross-commit chains (not just snapshots) and add history-aware checks; use datasets like CrossCommitVuln-Bench to quantify gaps.
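The dual-view auditing advice can be made concrete with a tiny cross-check: compute the same quantity through two independently derived formalizations and only keep results when they agree. The status labels and tolerance below are illustrative assumptions.

```python
# Dual-view audit (sketch): spec_view and code_view are two independent
# evaluators of the same problem instance (e.g., text-derived vs.
# code-derived formalizations). Disagreement is routed to human review
# rather than silently trusted because the code "runs".
def dual_view_audit(spec_view, code_view, instance, tol: float = 1e-6) -> str:
    a, b = spec_view(instance), code_view(instance)
    return "KEEP" if abs(a - b) <= tol else "NEED_REVIEW"
```

The point is that an off-by-one in one formalization executes cleanly and produces a plausible number; only the cross-view disagreement surfaces it.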
Generated from per-paper analyses; no external browsing.
