Daily AI Paper Report (2026-04-15)


Run stats

  • Candidates: 306
  • Selected: 30
  • Deepread completed: 30
  • Window (UTC): 2026-04-13T00:00:00Z → 2026-04-14T00:00:00Z (arxiv_announce, expanded=0)
Selected papers

  • 2604.11790 · ClawGuard: A Runtime Security Framework for Tool-Augmented LLM Agents Against Indirect Prompt Injection [PDF] · cs.CR, cs.AI · score 95
    Why: Runtime, deterministic guardrails at tool boundaries to stop indirect prompt injection in agents. Tags: agent-security, tool-use, prompt-injection, runtime-enforcement, auditing, sandboxing
  • 2604.11072 · Hodoscope: Unsupervised Monitoring for AI Misbehaviors [PDF] · cs.AI · score 95
    Why: Unsupervised monitoring to surface novel agent misbehaviors beyond predefined rules/judges. Tags: agent-safety, monitoring, unsupervised, anomaly-detection, evaluation
  • 2604.11806 · Detecting Safety Violations Across Many Agent Traces [PDF] · cs.AI, cs.CL · score 93
    Why: Scalable auditing: finds rare/adversarial safety violations only visible across many agent traces. Tags: auditing, monitoring, agent-traces, red-teaming, clustering, safety-eval
  • 2604.11322 · Do LLMs Know Tool Irrelevance? Demystifying Structural Alignment Bias in Tool Invocations [PDF] · cs.CL, cs.AI · score 93
    Why: Finds a tool-refusal flaw; introduces SABEval to isolate structural vs semantic tool relevance. Tags: tool-use, agents, safety, evaluation, dataset, robustness
  • 2604.11259 · Mobile GUI Agent Privacy Personalization with Trajectory Induced Preference Optimization [PDF] · cs.AI, cs.CR · score 92
    Why: Preference optimization for privacy-personalized mobile GUI agents with heterogeneous trajectories. Tags: agents, privacy, preference-optimization, mobile, security
  • 2604.10988 · WebForge: Breaking the Realism-Reproducibility-Scalability Trilemma in Browser Agent Benchmark [PDF] · cs.AI, cs.CV · score 92
    Why: Automated, scalable browser-agent benchmark resolving realism/reproducibility; strong eval utility. Tags: agents, benchmarks, browser-agents, evaluation, automation, web
  • 2604.11061 · Pando: Do Interpretability Methods Work When Models Won't Explain Themselves? [PDF] · cs.LG, cs.AI · score 91
    Why: Benchmark isolates interpretability signal from elicitation; tests when models mis- or avoid explaining. Tags: mechanistic-interpretability, evaluation, model-organisms, faithfulness, alignment-auditing
  • 2604.11623 · Context Kubernetes: Declarative Orchestration of Enterprise Knowledge for Agentic AI Systems [PDF] · cs.AI, cs.SE · score 91
    Why: Enterprise agent knowledge orchestration with permissions and freshness guarantees; shows leakage/phantom-content issues. Tags: agentic-systems, permissions, governance, RAG, security, deployment
  • 2604.11581 · Decomposing and Reducing Hidden Measurement Error in LLM Evaluation Pipelines [PDF] · cs.CL · score 90
    Why: Decomposes hidden uncertainty in LLM eval pipelines; shows design-choice variance can flip rankings. Tags: evaluation, measurement-error, judge-models, prompt-variance, reproducibility, safety-standards
  • 2604.11201 · CocoaBench: Evaluating Unified Digital Agents in the Wild [PDF] · cs.CL, cs.AI · score 90
    Why: Benchmark for unified digital agents requiring long-horizon composition of vision/search/coding. Tags: agents, benchmark, evaluation, long-horizon, tool-use
  • 2604.11174 · EmbodiedGovBench: A Benchmark for Governance, Recovery, and Upgrade Safety in Embodied Agent Systems [PDF] · cs.RO, cs.AI · score 89
    Why: Governance-focused embodied-agent benchmark: controllability, policy bounds, recovery, auditability. Tags: embodied-agents, governance, oversight, recovery, audit-trails, benchmark
  • 2604.11304 · BankerToolBench: Evaluating AI Agents in End-to-End Investment Banking Workflows [PDF] · cs.AI · score 89
    Why: High-fidelity benchmark for end-to-end investment banking workflows with real tools/deliverables. Tags: agents, benchmark, evaluation, tool-use, real-world-tasks
  • 2604.11641 · CodeTracer: Towards Traceable Agent States [PDF] · cs.SE, cs.AI · score 89
    Why: Tracing architecture for agent state transitions and error chains; improves debugging, auditing, reliability. Tags: agents, observability, tracing, debugging, code-agents, monitoring
  • 2604.11307 · PaperScope: A Multi-Modal Multi-Document Benchmark for Agentic Deep Research Across Massive Scientific Papers [PDF] · cs.AI · score 88
    Why: Multimodal multi-document benchmark for agentic deep research over papers, including tables/figures. Tags: agents, benchmark, multimodal, scientific-reasoning, retrieval
  • 2604.11182 · Evaluating Memory Capability in Continuous Lifelog Scenario [PDF] · cs.CL · score 88
    Why: LifeDialBench plus an online temporal-causality protocol for real lifelog memory; reduces temporal leakage. Tags: memory, long-context, evaluation, benchmark, online-eval, agents
  • 2604.11036 · Uncertainty-Aware Web-Conditioned Scientific Fact-Checking [PDF] · cs.CL, cs.AI · score 88
    Why: Uncertainty-gated web retrieval for scientific fact-checking; targets hallucination and grounding. Tags: fact-checking, uncertainty, grounding, retrieval, hallucinations, evaluation
  • 2604.11120 · Persona Non Grata: Single-Method Safety Evaluation Is Incomplete for Persona-Imbued LLMs [PDF] · cs.AI · score 87
    Why: Shows persona safety differs for prompting vs activation steering; single-method eval misses risks. Tags: safety-evaluation, personas, activation-steering, jailbreaks, robustness
  • 2604.11662 · Hidden Failures in Robustness: Why Supervised Uncertainty Quantification Needs Better Evaluation [PDF] · cs.CL · score 87
    Why: Large study shows uncertainty probes fail under distribution shift; calls for better UQ/hallucination eval. Tags: uncertainty, hallucinations, robustness, OOD, evaluation
  • 2604.11309 · The Salami Slicing Threat: Exploiting Cumulative Risks in LLM Systems [PDF] · cs.CR, cs.AI, cs.CL, cs.CV, cs.LG · score 86
    Why: Multi-turn jailbreak via cumulative "salami slicing" risk; highlights covert escalation failures. Tags: jailbreaks, multi-turn-attacks, cumulative-risk, adversarial-prompting, security
  • 2604.11784 · ClawGUI: A Unified Framework for Training, Evaluating, and Deploying GUI Agents [PDF] · cs.LG, cs.AI, cs.CL, cs.CV · score 86
    Why: Open-source full-stack GUI agent framework: RL infrastructure, stable eval, and deployment to real devices. Tags: GUI-agents, RL, evaluation, infrastructure, deployment, reproducibility
  • 2604.10966 · You Only Judge Once: Multi-response Reward Modeling in a Single Forward Pass [PDF] · cs.CV, cs.AI · score 86
    Why: Single-pass multi-response reward modeling plus new N-way benchmarks; cheaper preference learning. Tags: reward-modeling, RLHF, preference-learning, efficiency, benchmarks, multimodal
  • 2604.11557 · UniToolCall: Unifying Tool-Use Representation, Data, and Evaluation for LLM Agents [PDF] · cs.AI · score 85
    Why: Unifies tool-use representations with 22k tools and 390k instances; improves comparability for agents. Tags: tool-use, agents, datasets, evaluation, function-calling, standardization
  • 2604.11523 · PAC-BENCH: Evaluating Multi-Agent Collaboration under Privacy Constraints [PDF] · cs.AI, cs.MA · score 84
    Why: Benchmark for multi-agent collaboration under privacy constraints; surfaces failure modes and metrics. Tags: multi-agent, privacy, collaboration, benchmark, coordination-failures, hallucinations
  • 2604.11419 · Beyond RAG for Cyber Threat Intelligence: A Systematic Evaluation of Graph-Based and Agentic Retrieval [PDF] · cs.AI, cs.CR · score 84
    Why: Systematic eval of graph-based vs agentic retrieval for cyber threat intelligence QA. Tags: RAG, retrieval, knowledge-graphs, cybersecurity, evaluation
  • 2604.11094 · E2E-REME: Towards End-to-End Microservices Auto-Remediation via Experience-Simulation Reinforcement Fine-Tuning [PDF] · cs.SE, cs.AI · score 84
    Why: MicroRemed benchmark plus RL fine-tuning for end-to-end LLM remediation generating executable playbooks. Tags: agents, autonomous-remediation, benchmark, RLHF/RLFT, reliability, devops
  • 2604.11611 · Utilizing and Calibrating Hindsight Process Rewards via Reinforcement with Mutual Information Self-Evaluation [PDF] · cs.CL, cs.LG · score 84
    Why: Formal and practical method for calibrated hindsight self-reward to densify RL for LLM agents. Tags: LLM-agents, reinforcement-learning, self-reward, calibration, theory, mutual-information
  • 2604.11778 · General365: Benchmarking General Reasoning in Large Language Models Across Diverse and Challenging Tasks [PDF] · cs.CL, cs.AI · score 83
    Why: General365 benchmark targets broad "general reasoning" decoupled from specialized knowledge. Tags: reasoning, benchmark, evaluation, generalization
  • 2604.11012 · Min-$k$ Sampling: Decoupling Truncation from Temperature Scaling via Relative Logit Dynamics [PDF] · cs.AI, cs.CL, cs.LG · score 83
    Why: Min-k decoding reduces temperature sensitivity via logit-shape "semantic cliffs"; a practical generation-quality lever. Tags: decoding, sampling, generation, inference, calibration
  • 2604.11666 · Playing Along: Learning a Double-Agent Defender for Belief Steering via Theory of Mind [PDF] · cs.CL, cs.AI, cs.LG · score 82
    Why: ToM-based "double-agent" defense task; frontier models struggle, making it useful for adversarial dialogue eval. Tags: theory-of-mind, adversarial-dialogue, privacy, social-engineering, evaluation, defense
  • 2604.11258 · Dialectic-Med: Mitigating Diagnostic Hallucinations via Counterfactual Adversarial Multi-Agent Debate [PDF] · cs.CL · score 82
    Why: Adversarial multi-agent debate with visual falsification to reduce diagnostic hallucinations. Tags: hallucinations, multi-agent, healthcare, multimodal, robustness

AI Paper Insight Brief

2026-04-15

0) Executive takeaways (read this first)

  • Evaluation is shifting from “single-score” to “diagnostic infrastructure”: multiple new benchmarks/harnesses (WebForge, CocoaBench, BankerToolBench (BTB), PaperScope, EmbodiedGovBench, LifeDialBench, PAC-BENCH, Pando, CodeTracer) emphasize reproducibility, per-dimension breakdowns, and process/trace-level evidence over aggregate accuracy.
  • Multi-turn and cross-trace risk is now a first-class threat model: Salami Slicing shows gradual jailbreaks with high attack success rates (ASR) that evade per-turn refusal; Meerkat and Hodoscope show repository-/group-level discovery can surface cheating/exploits and novel misbehaviors with far less human review.
  • Tool-augmented agents have two distinct safety gaps: (i) semantic attacks (indirect prompt injection) where deterministic boundary enforcement (ClawGuard) can cut ASR sharply; (ii) structural failures where models call irrelevant tools due to interface match (SABEval), mitigated by attention-pathway rebalancing.
  • Preference/reward modeling is becoming more listwise, more efficient, and more “calibrated”: single-pass multi-response reward modeling reduces multimodal RM latency/FLOPs while improving ranking and GRPO stability; MISE adds calibration to hindsight process rewards to avoid self-eval bias.
  • Interpretability results are sobering but actionable: Pando finds that when explanations are absent/misleading, gradient/RelP are the only consistent white-box signals for predicting behavior; many popular readouts mostly capture “task representation,” not decision computation.
  • Robustness work is increasingly about “measurement error” and OOD reality checks: TEE shows pipeline design variance (prompt/judge interactions) can dominate and that naive confidence intervals under-cover; supervised UQ probes often collapse OOD (especially long-form), with middle-layer probes and token-averaging helping but not solving the problem.

1) Key themes (clusters)

Theme: Reproducible, diagnostic agent benchmarking (beyond aggregate success)

Theme: Monitoring & auditing for “unknown unknowns” (group/repository-level)

  • Why it matters: Many serious failures are sparse, novel, or distributed—missed by per-turn monitors and category-based judges—so discovery workflows must prioritize human attention efficiently.
  • Representative papers: Hodoscope (unsupervised discovery of novel misbehaviors) and Meerkat (cross-trace detection of rare safety violations).
  • Common approach:
    • Represent actions/traces in an embedding space, then surface overrepresented clusters (density-diffing across groups; clustering + agentic search); a minimal sketch follows this list.
    • Optimize for review efficiency (density-weighted farthest-point sampling; witness-set localization).
    • Convert discoveries into improved supervised monitors (exemplar augmentation) or benchmark patches (Commit0 vulnerability).
  • Open questions / failure modes:
    • Requires meaningful comparison groups; misbehaviors equally common across groups may not pop (Hodoscope).
    • Vulnerable to adversarial dispersion in embedding space (Meerkat AP drop under embedding-prefix attack).
    • Extremely rare single-run anomalies may evade cluster-based discovery (Hodoscope).
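
A minimal sketch of the density-diffing plus prioritized-review recipe above, assuming trace embeddings are already computed (e.g., with any sentence-embedding model); the KDE bandwidth, review budget, and weighting scheme are illustrative choices, not parameters from Hodoscope or Meerkat:

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def overrepresentation(candidates, baseline, bandwidth=0.5):
    """Density-diffing: score each candidate-group embedding by how much
    denser it is among candidates than in the baseline group."""
    kde_c = KernelDensity(bandwidth=bandwidth).fit(candidates)
    kde_b = KernelDensity(bandwidth=bandwidth).fit(baseline)
    # score_samples returns log-density, so the difference is a log-ratio.
    return kde_c.score_samples(candidates) - kde_b.score_samples(candidates)

def review_set(embeddings, weights, budget=20):
    """Density-weighted farthest-point sampling: pick a diverse subset of
    traces, preferring high-overrepresentation points, to review first."""
    chosen = [int(np.argmax(weights))]
    dist = np.linalg.norm(embeddings - embeddings[chosen[0]], axis=1)
    while len(chosen) < budget:
        idx = int(np.argmax(dist * np.maximum(weights, 0.0)))
        chosen.append(idx)
        dist = np.minimum(dist, np.linalg.norm(embeddings - embeddings[idx], axis=1))
    return chosen

# Usage: flags = overrepresentation(cand_emb, base_emb)
#        order = review_set(cand_emb, flags)
```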

Theme: Multi-turn adversaries & cumulative-risk defenses

Theme: Tool-use reliability: structural bias, standardization, and privacy-aware personalization

Theme: Reward/preference modeling & decoding robustness for safer generation

Theme: Interpretability & evaluation reliability under unfaithful explanations / OOD

2) Technical synthesis

  • Listwise scoring is spreading: YOJO’s cross-entropy over N candidates parallels a broader move away from pairwise-only comparisons (also echoed by trajectory/requirement-level scoring in PAC-BENCH/BTB); a toy loss sketch follows this list.
  • “Causality constraints” in evaluation are becoming explicit: LifeDialBench’s online protocol prevents future-context leakage; WebForge validates solvability by replay in Chromium; BTB grades deliverables inside the same environment.
  • Agent safety is moving from content filtering to systems enforcement: ClawGuard’s deterministic pre-invocation checks complement (not replace) judge-based approaches; Context Kubernetes similarly enforces permission/freshness invariants at the orchestration layer.
  • Multi-turn threat models unify several papers: Salami (cumulative intent), TOM-SB (belief steering), PAC-BENCH (early-turn privacy violations), and Meerkat (distributed evidence across traces) all show that turn-local metrics miss key failures.
  • Embedding-space methods are powerful but attackable: Hodoscope/Meerkat rely on clustering/projection; Meerkat demonstrates adversarial dispersion can break detection, suggesting a need for robust grouping or multi-view signals.
  • Interpretability signal that survives unfaithful explanations is narrow: Pando finds gradients/RelP help when verbal rationales are absent/misleading; SABEval similarly uses attention-pathway analysis (CAA) to identify and intervene on a structural shortcut.
  • Calibration is a recurring motif: Atomic+Search gates web retrieval by calibrated uncertainty bands; MISE calibrates self-eval rewards to env success; TEE calibrates evaluation confidence by modeling design variance.
  • Benchmarks increasingly include “anti-cheating” and integrity checks: WebForge adds anti-cheating mechanisms; Meerkat finds real benchmark cheating; BTB uses a verifier with measured agreement to reduce subjective grading drift.
  • Robust decoding is being treated as a safety/quality primitive: Min-k’s temperature-invariant truncation targets semantic collapse at high temperature with modest overhead, relevant for agent exploration settings; a hedged sketch follows this list.
  • Process-level artifacts are becoming training signals: CodeTracer’s localized evidence enables reflective replay improvements; MISE uses per-step hindsight rewards; ClawGUI uses PRM + GiGPO for step-level credit.
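
A toy version of the listwise objective referenced in the first bullet above; how YOJO packs N responses into one forward pass and where it reads off the scalar scores are implementation details not reproduced here, so this is the generic N-way cross-entropy rather than the paper's code:

```python
import torch
import torch.nn.functional as F

def listwise_rm_loss(scores: torch.Tensor, best: torch.Tensor) -> torch.Tensor:
    """Treat the N per-candidate scalar scores from a single forward pass as
    logits over candidates; cross-entropy against the preferred index.
    scores: [batch, N]; best: [batch] long tensor of preferred indices."""
    return F.cross_entropy(scores, best)

# Toy check: 2 prompts, 4 candidate responses each.
scores = torch.randn(2, 4, requires_grad=True)
listwise_rm_loss(scores, torch.tensor([0, 2])).backward()
```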
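For the decoding bullet, a deliberately hedged sketch: the Min-k paper's exact rule is not reproduced here, and the cutoff below is only one plausible reading of "decoupling truncation from temperature via relative logit dynamics" (truncate on raw logit gaps before temperature is applied, so the surviving token set is temperature-invariant; `gap` is a made-up hyperparameter):

```python
import torch

def min_k_style_sample(logits, gap=5.0, temperature=1.0):
    """HYPOTHETICAL min-k-style decoding: tokens whose raw logit is within
    `gap` of the maximum survive truncation at any temperature; temperature
    then only controls sampling sharpness among the survivors."""
    keep = logits >= logits.max(dim=-1, keepdim=True).values - gap
    masked = logits.masked_fill(~keep, float("-inf"))
    return torch.multinomial(torch.softmax(masked / temperature, dim=-1), 1)
```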

3) Top 5 papers (with “why now”)

1) BankerToolBench: Evaluating AI Agents in End-to-End Investment Banking Workflows

  • Provides a high-fidelity, multi-file workflow benchmark (100 tasks; rubrics ~150 criteria/task) that better matches real delegation stakes.
  • Introduces an agentic verifier (Gandalf) with reported agreement vs humans (accuracy 88.2%, κ=0.76), enabling scalable grading of Excel/PPT/PDF deliverables.
  • Shows frontier models are far from delegation-ready (best Pass@1 reported 16%; passing all critical criteria is rare).
  • Skepticism: benchmark simplifies live banking dynamics and is US-centric; still a proxy for real deal work.

2) The Salami Slicing Threat: Exploiting Cumulative Risks in LLM Systems

  • Formalizes cumulative multi-turn jailbreak risk and proves sub-threshold prompts can accumulate beyond harm thresholds.
  • Demonstrates high ASR across multiple LLMs/benchmarks and extends to multimodal targets (VLMs/diffusion).
  • Proposes Cumulative Query Auditing (CQA) that substantially reduces ASR in experiments.
  • Skepticism: CQA uses an LLM judge in prototype form; production cost/latency and robustness need validation.

3) WebForge: Breaking the Realism-Reproducibility-Scalability Trilemma in Browser Agent Benchmark

  • Automated generation of self-contained static websites with real-web noise + anti-cheating, addressing content drift while staying realistic.
  • 934 validated tasks with a 74.1% pipeline pass rate; validation replays solutions in Chromium to ensure solvability.
  • Per-dimension difficulty reveals capability differences; removing screenshots drops accuracy by ~16 pp.
  • Skepticism: static sites can’t fully capture server-side/multi-user/real-time web semantics.

4) Pando: Do Interpretability Methods Work When Models Won’t Explain Themselves?

  • Cleanly isolates the elicitation confounder by controlling whether models give faithful/no/unfaithful rationales.
  • Large paired study (720 models) finds gradient/RelP are the only consistent white-box gains when explanations are absent/misleading.
  • Variance decomposition shows many readouts track field identity/value rather than decision relevance.
  • Skepticism: planted decision trees in a 2B LoRA setting may not generalize to distributed real-world features.

5) ClawGuard: A Runtime Security Framework for Tool-Augmented LLM Agents Against Indirect Prompt Injection

  • Middleware that enforces deterministic, auditable checks at tool-call boundaries (sanitizer, rule evaluator, skill inspector, approval + logs).
  • Demonstrates large ASR reductions across AgentDojo / SkillInject / MCPSafeBench in a basic-rule configuration.
  • Introduces task-rule induction with user confirmation (Rtask), though not evaluated in the reported experiments.
  • Skepticism: residual failures include content-misleading attacks; published results omit the context-aware rule induction.

4) Practical next steps

  • Add cumulative-intent auditing to your safety stack: prototype a CQA-like check that periodically scores conversation history, not just the last user turn, and measure ASR reduction on multi-turn jailbreak suites (sketch 1 below).
  • Harden tool boundaries deterministically: implement a ClawGuard-style pre-invocation policy (cmd/file/net) with audit logs; evaluate on indirect prompt injection benchmarks and track residual “harm-in-text” cases separately (sketch 2 below).
  • Test for structural alignment bias in your tool router: create SABEval-like sibling-tool tests (same schema, different semantics) and measure tool-invocation rate; consider pathway-level interventions or training data that breaks the shortcut (sketch 3 below).
  • Adopt measurement-error-aware evaluation: run small factorial pilots (≥2–3 prompt variants, multiple judges) and use variance decomposition to decide whether to spend budget on more items vs more judges/prompts (TEE; sketch 4 below).
  • Move from per-trace to repository/group auditing: deploy Hodoscope/Meerkat-style clustering + prioritized review on your agent logs; explicitly test robustness to embedding-space dispersion attacks.
  • For multimodal RLHF/RLAIF pipelines: try multi-response reward modeling for best-of-N and GRPO-style training; measure both ranking quality and latency/FLOPs savings, and test N>4 scaling if relevant.
  • For long-horizon memory agents: evaluate with a causal online protocol (LifeDialBench-style) to quantify future-context leakage; compare raw-text preservation vs compressed memory and track accuracy decay over time.
  • For interpretability-driven audits: when explanations are unreliable, prioritize gradient/RelP-style signals (per Pando) and validate that they improve held-out behavior prediction under a fixed query budget.
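
Sketch 1 (cumulative-intent auditing): a minimal CQA-like check; `judge` stands for whatever harm scorer you already run per turn, and `every_k`/`threshold` are illustrative knobs, not values from the Salami Slicing paper:

```python
from typing import Callable, List

def cumulative_query_audit(history: List[str],
                           judge: Callable[[str], float],
                           every_k: int = 3,
                           threshold: float = 0.7) -> bool:
    """Periodically score the ACCUMULATED user intent over the whole
    conversation rather than only the latest turn, so individually
    sub-threshold requests that add up still get flagged."""
    if len(history) == 0 or len(history) % every_k != 0:
        return False
    return judge("\n".join(history)) >= threshold
```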
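Sketch 2 (deterministic tool boundaries): a pre-invocation policy with an append-only audit log; the rule format here is invented for illustration and is not ClawGuard's actual schema:

```python
import fnmatch, json, time

POLICY = {  # illustrative rules only; not ClawGuard's policy format
    "shell.run": {"deny_substrings": ["rm -rf", "curl ", "| sh"]},
    "fs.read":   {"allow_paths": ["/workspace/*"]},
    "http.get":  {"allow_hosts": ["docs.internal.example"]},
}

def check_tool_call(tool: str, args: dict, audit_path: str = "audit.jsonl") -> bool:
    """Deterministic check at the tool boundary, run before every call,
    independent of model output. Unknown tools pass with an audit entry;
    tighten to default-deny for production."""
    rule = POLICY.get(tool, {})
    allowed = not any(s in args.get("cmd", "")
                      for s in rule.get("deny_substrings", []))
    if "allow_paths" in rule:
        allowed &= any(fnmatch.fnmatch(args.get("path", ""), p)
                       for p in rule["allow_paths"])
    if "allow_hosts" in rule:
        allowed &= args.get("host", "") in rule["allow_hosts"]
    with open(audit_path, "a") as f:  # append-only audit trail
        f.write(json.dumps({"ts": time.time(), "tool": tool,
                            "args": args, "allowed": allowed}) + "\n")
    return allowed
```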
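Sketch 3 (structural-bias probing): a SABEval-inspired sibling-tool probe; `route` stands in for your own tool-selection call, and the sibling schemas are hypothetical examples:

```python
SIBLING_TOOLS = [  # hypothetical pair: identical schema, unrelated semantics
    {"name": "get_weather",  "params": {"city": "string"}},
    {"name": "get_timezone", "params": {"city": "string"}},
]

def structural_bias_rate(route, irrelevant_queries, tools=SIBLING_TOOLS):
    """Fraction of tool-irrelevant queries on which the router still invokes
    some tool. Because the siblings share one schema, invocations here point
    to structural (interface-match) bias rather than semantic relevance.
    `route(query, tools)` should return a tool name, or None for refusal."""
    calls = sum(1 for q in irrelevant_queries if route(q, tools) is not None)
    return calls / max(len(irrelevant_queries), 1)
```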
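Sketch 4 (measurement-error-aware evaluation): a crude variance decomposition for a small factorial pilot; this uses simple marginal-mean variances rather than TEE's estimator, so treat the output as budget-allocation guidance, not precise components:

```python
import numpy as np

def variance_components(scores: np.ndarray) -> dict:
    """scores: [items, prompt_variants, judges], one score per cell.
    Compares how much variance the prompt and judge factors contribute
    relative to items, to guide where extra eval budget should go."""
    comps = {
        "item":   scores.mean(axis=(1, 2)).var(),
        "prompt": scores.mean(axis=(0, 2)).var(),
        "judge":  scores.mean(axis=(0, 1)).var(),
    }
    comps["residual"] = max(scores.var() - sum(comps.values()), 0.0)
    return comps

# e.g. a pilot with 50 items x 3 prompt variants x 2 judges
print(variance_components(np.random.default_rng(0).random((50, 3, 2))))
```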

Generated from per-paper analyses; no external browsing.