Daily AI Paper Report (2026-04-29)

Run stats

  • Candidates: 244
  • Selected: 30
  • Deepread completed: 30
  • Window (UTC): 2026-04-27T00:00:00Z → 2026-04-28T00:00:00Z (arxiv_announce, expanded=0)
Selected papers

  • 2604.24118 · AgentVisor: Defending LLM Agents Against Prompt Injection via Semantic Virtualization
    Categories: cs.CR · Score: 95
    Why: Prompt-injection defense for agents via semantic privilege separation and audited tool mediation.
    Tags: agent-security, prompt-injection, tool-use, sandboxing, guardrails
  • 2604.24082 · Jailbreaking Frontier Foundation Models Through Intention Deception
    Categories: cs.CR, cs.AI, cs.CL · Score: 95
    Why: Targets jailbreaks via intention deception against frontier safe-completion models; highly safety-relevant.
    Tags: jailbreak, agent-safety, safe-completion, red-teaming, frontier-models
  • 2604.24542 · Layerwise Convergence Fingerprints for Runtime Misbehavior Detection in Large Language Models
    Categories: cs.CR, cs.AI, cs.CL · Score: 93
    Why: Runtime monitor for jailbreaks, prompt injection, and backdoors without clean-model assumptions.
    Tags: runtime-monitoring, jailbreaks, prompt-injection, backdoors, llm-security
  • 2604.24657 · AgentWard: A Lifecycle Security Architecture for Autonomous AI Agents
    Categories: cs.CR, cs.AI · Score: 92
    Why: Defense-in-depth security architecture for autonomous agents across lifecycle stages and tool execution.
    Tags: agent-security, agents, defense-in-depth, tool-use, runtime-safety
  • 2604.24348 · OS-SPEAR: A Toolkit for the Safety, Performance, Efficiency, and Robustness Analysis of OS Agents
    Categories: cs.CL · Score: 91
    Why: Comprehensive OS-agent toolkit spanning safety, performance, efficiency, and robustness evaluation.
    Tags: agents, benchmark, os-agents, safety-evaluation, robustness
  • 2604.24700 · Green Shielding: A User-Centric Approach Towards Trustworthy AI
    Categories: cs.CL, cs.AI · Score: 91
    Why: User-centric robustness benchmark for benign prompt variation; strong deployment-safety relevance.
    Tags: llm-safety, evaluation, robustness, medical-ai, red-teaming
  • 2604.24710 · Case-Specific Rubrics for Clinical AI Evaluation: Methodology, Validation, and LLM-Clinician Agreement Across 823 Encounters
    Categories: cs.AI, cs.CL · Score: 91
    Why: Clinician-authored rubric framework for scalable clinical AI eval with large real-world validation.
    Tags: evaluation, clinical-ai, llm-as-judge, safety, deployment
  • 2604.24074 · How Sensitive Are Safety Benchmarks to Judge Configuration Choices?
    Categories: cs.CL · Score: 90
    Why: Shows safety benchmark scores vary sharply with judge prompts, challenging reliability of current evals.
    Tags: safety-evaluation, llm-as-judge, benchmarking, harmbench, measurement
  • 2604.24618 · Evaluating whether AI models would sabotage AI safety research
    Categories: cs.AI · Score: 89
    Why: Direct evaluation of whether frontier models sabotage AI safety research in agentic settings.
    Tags: ai-safety, agent-evals, sabotage, frontier-models, alignment
  • 2604.24594 · Skill Retrieval Augmentation for Agentic AI
    Categories: cs.CL, cs.AI · Score: 89
    Why: Scalable skill retrieval for agents tackles context limits; likely reusable agent framework/benchmark.
    Tags: agents, retrieval, tool-use, long-context, agent-architecture
  • 2604.24697 · Can Current Agents Close the Discovery-to-Application Gap? A Case Study in Minecraft
    Categories: cs.AI · Score: 89
    Why: Benchmark for agents' discovery-to-application loop; tests generalization beyond memorized solutions.
    Tags: agents, benchmark, reasoning, evaluation, generalization
  • 2604.24005 · TCOD: Exploring Temporal Curriculum in On-Policy Distillation for Multi-turn Autonomous Agents
    Categories: cs.LG, cs.AI · Score: 88
    Why: Addresses instability in on-policy distillation for multi-turn agents with curriculum training.
    Tags: agents, distillation, reinforcement-learning, reasoning, training
  • 2604.24198 · Rewarding the Scientific Process: Process-Level Reward Modeling for Agentic Data Analysis
    Categories: cs.CL, cs.AI, cs.CE, cs.LG, cs.MA · Score: 88
    Why: Process reward modeling for agentic data analysis with environment-aware verification and silent-error detection.
    Tags: process-reward-model, agents, verification, reasoning, reliability
  • 2604.24473 · Agentic clinical reasoning over longitudinal myeloma records: a retrospective evaluation against expert consensus
    Categories: cs.AI, cs.CL · Score: 88
    Why: Large retrospective study of agentic clinical reasoning over long records against expert consensus.
    Tags: agents, clinical-ai, long-context, rag, evaluation
  • 2604.24686 · Governing What You Cannot Observe: Adaptive Runtime Governance for Autonomous AI Agents
    Categories: cs.AI · Score: 87
    Why: Adaptive runtime governance framework for autonomous agents with explicit risk-bound decision rules.
    Tags: agent-governance, runtime-safety, risk-estimation, autonomous-agents, monitoring
  • 2604.24021 · QED: An Open-Source Multi-Agent System for Generating Mathematical Proofs on Open Problems
    Categories: cs.AI, math.AP · Score: 87
    Why: Open multi-agent proof system with explicit failure modes; useful for agent reliability research.
    Tags: agents, reasoning, evaluation, multi-agent, math
  • 2604.24395 · Aligning with Your Own Voice: Self-Corrected Preference Learning for Hallucination Mitigation in LVLMs
    Categories: cs.AI · Score: 86
    Why: Self-corrected preference learning for LVLM hallucination mitigation with concrete alignment angle.
    Tags: alignment, hallucination, vlm, dpo, reliability
  • 2604.24197 · Seeing Is No Longer Believing: Frontier Image Generation Models, Synthetic Visual Evidence, and Real-World Risk
    Categories: cs.CL, cs.AI · Score: 86
    Why: Timely risk analysis of frontier image models and synthetic visual evidence with policy relevance.
    Tags: ai-safety, multimodal, misinformation, risk-analysis, frontier-models
  • 2604.24162 · Defusing the Trigger: Plug-and-Play Defense for Backdoored LLMs via Tail-Risk Intrinsic Geometric Smoothing
    Categories: cs.CR, cs.AI · Score: 85
    Why: Plug-and-play inference-time defense for backdoored LLMs with no retraining or clean data.
    Tags: backdoor-defense, llm-security, inference-time, robustness, attention
  • 2604.24184 · Dynamic Cyber Ranges
    Categories: cs.CR · Score: 84
    Why: Dynamic cyber ranges with defender agents offer stronger evaluation for offensive agent capabilities.
    Tags: cybersecurity, agent-evaluation, red-teaming, defender-agents, benchmarks
  • 2604.24698 · The Chameleon's Limit: Investigating Persona Collapse and Homogenization in Large Language Models
    Categories: cs.CL · Score: 84
    Why: Identifies persona collapse in LLM populations; important for multi-agent realism and risk analysis.
    Tags: multi-agent, evaluation, llm-behavior, robustness, social-simulation
  • 2604.24302 · Differentiable Faithfulness Alignment for Cross-Model Circuit Transfer
    Categories: cs.CL · Score: 84
    Why: Mechanistic interpretability method for transferring circuits across models could improve scalable auditing.
    Tags: mechanistic-interpretability, circuits, model-auditing, transfer, reliability
  • 2604.24039 · AgenticCache: Cache-Driven Asynchronous Planning for Embodied AI Agents
    Categories: cs.LG, cs.AI, cs.CL · Score: 84
    Why: Improves embodied agent efficiency via cache-based planning with strong latency and token reductions.
    Tags: agents, efficiency, planning, embodied-ai, llm-systems
  • 2604.24623 · XGRAG: A Graph-Native Framework for Explaining KG-based Retrieval-Augmented Generation
    Categories: cs.AI, cs.IR, cs.LG · Score: 82
    Why: Explainability for GraphRAG via causal graph perturbations improves transparency and trust.
    Tags: rag, graphrag, interpretability, grounding, explainability
  • 2604.24178 · Meta-Aligner: Bidirectional Preference-Policy Optimization for Multi-Objective LLMs Alignment
    Categories: cs.LG, cs.AI · Score: 82
    Why: Adaptive multi-objective alignment via meta-learning addresses conflicting values in LLM optimization.
    Tags: alignment, multi-objective, preference-optimization, meta-learning, llms
  • 2604.23954 · An empirical evaluation of the risks of AI model updates using clinical data: stability, arbitrariness, and fairness
    Categories: cs.AI · Score: 82
    Why: Empirical study of model update risks in clinical AI covering stability, arbitrariness, and fairness.
    Tags: safety, fairness, evaluation, clinical-ai, model-updates
  • 2604.24477 · GAMMAF: A Common Framework for Graph-Based Anomaly Monitoring Benchmarking in LLM Multi-Agent Systems
    Categories: cs.CR, cs.AI · Score: 80
    Why: Open benchmarking framework for anomaly monitoring in LLM multi-agent systems under attacks.
    Tags: multi-agent, benchmark, anomaly-detection, security, evaluation
  • 2604.24564 · MEG-RAG: Quantifying Multi-modal Evidence Grounding for Evidence Selection in RAG
    Categories: cs.CL, cs.IR, cs.IT · Score: 80
    Why: Metric for multimodal evidence grounding in RAG targets hallucination and evidence quality.
    Tags: rag, multimodal, evaluation, grounding, hallucination
  • 2604.24222 · MEMCoder: Multi-dimensional Evolving Memory for Private-Library-Oriented Code Generation
    Categories: cs.SE, cs.AI, cs.CL · Score: 80
    Why: Memory framework for private-library codegen addresses enterprise RAG gaps with evolving usage guidance.
    Tags: code-llm, rag, memory, enterprise-ai, agents
  • 2604.24038 · AgentPulse: A Continuous Multi-Signal Framework for Evaluating AI Agents in Deployment
    Categories: cs.AI, cs.CL, cs.SE · Score: 79
    Why: Continuous deployment-time evaluation for AI agents using multi-signal metrics beyond static benchmarks.
    Tags: agent-evaluation, deployment, monitoring, benchmarks, ecosystem

AI Paper Insight Brief

2026-04-29

0) Executive takeaways (read this first)

  • Agent work is shifting from single-score capability gains to runtime control and deployment realism: several papers focus on continuous evaluation, lifecycle defenses, runtime monitors, and dynamic benchmarks rather than static task success alone.
  • A recurring pattern is that structured mediation beats naive scaling: temporal curricula for distillation, semantic hypervisors for tool use, process reward models for data analysis, and skill/memory scaffolds all outperform simpler “just give the model more context” baselines.
  • Evaluation itself is being undermined by hidden variance: judge-prompt wording can swing safety scores by up to 24.2 points, deployment-aware rankings diverge sharply from benchmark-only rankings, and persona/population fidelity can fail even when per-instance metrics look good.
  • Security work is increasingly targeting indirect and lifecycle-spanning failures: para-jailbreaking, prompt injection through external content, backdoored weights, and multi-agent infection all require defenses that monitor internal state or mediate actions across stages.
  • In high-stakes domains, the strongest results come from tool-using, structured systems with explicit verification, but residual errors remain more consequential than aggregate metrics suggest—especially in clinical and safety-critical settings.
  • For frontier progress, the practical bottlenecks are less about raw model competence and more about incorporation, stability, grounding, and governance: retrieving skills/evidence is not enough unless the agent knows when and how to use them safely.

1) Key themes (clusters)

  • Runtime governance and defense-in-depth for agents
  • Agent evaluation is becoming deployment-aware and harder to trust
  • Structured scaffolding beats naive context stuffing in long-horizon agents
  • Grounding, verification, and evidence selection are moving upstream
  • Security research is targeting indirect, adaptive, and multi-agent attack surfaces
  • High-stakes domains are exposing the limits of aggregate metrics

2) Technical synthesis

  • Several papers converge on a monitor-then-intervene pattern: LCF monitors hidden-state deltas before generation, AgentVisor audits proposed tool calls, TIGS screens for attention collapse before smoothing, and clinical abstention methods defer on out-of-distribution cases (a control-flow sketch follows this list).
  • Structured intermediate representations are a recurring enabler: YAML proof DAGs in QED, semantic exceptions in AgentVisor, task/API guideline memories in MEMCoder, structured memory in clinical agents, and claim-proof-constraints-example summaries in SCICRAFTER.
  • A common failure mode across agent papers is retrieval/incorporation mismatch: retrieving the right skill, evidence, or document is often easier than getting the model to use it correctly.
  • Multiple works replace binary correctness with graded process signals: DataPRM’s ternary rewards, clinical rubric weighting, and multi-factor deployment scores all capture recoverable vs irrecoverable errors better than pass/fail metrics.
  • Curriculum and pacing appear as a general stabilization tool: TCOD controls rollout horizon during distillation; discovery agents improve with staged hints/scientist scaffolds; memory systems evolve guidelines over time rather than injecting everything at once.
  • Security defenses increasingly rely on internal geometry or topology, not just text classification: attention collapse, layerwise convergence fingerprints, graph anomaly monitoring, and graph-native perturbation explanations.
  • Several papers show benchmark ceilings can be misleading: iterative RAG and full-context converge in longitudinal clinical reasoning; benchmark-only rankings diverge from deployment-aware rankings; per-persona fidelity hides population collapse.
  • There is a growing split between architectural papers (strong conceptual framing but weak quantitative validation) and benchmark-heavy empirical papers (narrower scope); work that combines both remains rare.
  • In multimodal settings, the strongest gains come from evidence contribution modeling rather than raw relevance, whether for reranking (MEG-RAG) or hallucination correction (AVES-DPO).
  • Across domains, utility-preserving defense is the differentiator: one-shot self-correction, asynchronous cache updates, process rewards for exploration, and selective skill loading all try to avoid the usual safety-vs-performance collapse.
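As a concrete illustration of the monitor-then-intervene pattern noted above, here is a minimal Python sketch of the shared control flow. All names (Verdict, guarded_step, the callable interfaces) are illustrative assumptions, not taken from any of the papers; each paper supplies its own monitor and intervention.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Verdict:
    ok: bool
    reason: str = ""

# Hypothetical interfaces: each cited paper implements its own monitor
# (hidden-state deltas, tool-call audits, attention statistics) and its
# own intervention (one-shot correction, smoothing, abstention).
Monitor = Callable[[dict], Verdict]
Intervention = Callable[[dict, Verdict], Optional[dict]]

def guarded_step(step: dict,
                 monitors: list[Monitor],
                 intervene: Intervention,
                 execute: Callable[[dict], dict]) -> Optional[dict]:
    """Check a proposed agent step BEFORE executing it, rather than
    filtering outputs after the fact; intervene on the first failure."""
    for monitor in monitors:
        verdict = monitor(step)
        if not verdict.ok:
            repaired = intervene(step, verdict)
            if repaired is None:   # intervention chose to block/abstain
                return None
            step = repaired        # e.g. a one-shot corrected step
    return execute(step)
```

The shared design choice is that monitors see the proposed step (or internal state) before any side effect occurs, which is what lets these systems recover with a corrected step instead of simply refusing.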

3) Top 5 papers (with “why now”)

  • AgentVisor: Defending LLM Agents Against Prompt Injection via Semantic Virtualization
    • Reframes agent security as privilege separation: an untrusted Guest proposes actions, a trusted Visor audits them via Suitability, Taint, and Integrity checks (sketched in code after this list).
    • Achieves a near-zero attack success rate (ASR) on the evaluated direct and indirect prompt-injection benchmarks while preserving substantial utility under attack.
    • The one-shot semantic-exception recovery path is practically useful because it avoids the utility collapse of block-only defenses.
    • Why now: prompt injection is moving from toy demos to real tool-using agents, and this is one of the clearest deployable architectures for mediation.
    • Skepticism: adds latency, focuses on text settings, and long-context/multimodal scaling is still unresolved.
  • Rewarding the Scientific Process: Process-Level Reward Modeling for Agentic Data Analysis
    • Identifies two concrete PRM failure modes in data-analysis agents: silent semantic errors and over-penalized exploratory grounding steps.
    • DataPRM uses environment-aware ReAct verification, tool calls, and ternary rewards to improve both test-time scaling and RL training.
    • A 4B verifier outperforming larger PRM baselines is especially relevant for practical agent stacks.
    • Why now: agentic scientific/data-analysis workflows are proliferating, and process supervision is becoming more important than final-answer scoring.
    • Skepticism: scope is still mostly reasoning/visualization tasks, and the verifier pipeline adds compute and annotation overhead.
  • Jailbreaking Frontier Foundation Models Through Intention Deception
    • Introduces para-jailbreaking: models can refuse direct harmful requests yet still leak harmful alternative content under a benign-seeming narrative.
    • iDecep shows strong multi-turn attack success against frontier systems, including multimodal amplification with benign images.
    • The paper matters because it targets the newer safe-completion regime rather than older refusal-only defenses.
    • Why now: as labs shift to “helpful but safe” completions, indirect leakage becomes a more realistic failure mode than blunt refusal bypasses.
    • Skepticism: black-box experiments are limited in scope, and exact attack tooling is withheld, making replication and defense benchmarking harder.
  • Agentic clinical reasoning over longitudinal myeloma records: a retrospective evaluation against expert consensus
    • Shows a structured agentic system can beat both iterative RAG and full-context baselines on complex longitudinal clinical reasoning.
    • Gains are largest on the hardest questions and longest records, where current non-agentic methods appear to hit a ceiling.
    • The ablation suggests the skill library, not just tool access, is the main driver of improvement.
    • Why now: this is a concrete signal that agentic structure may finally outperform brute-force retrieval/context expansion in a real high-stakes domain.
    • Skepticism: the study is retrospective and institution-specific, and residual system errors are more often clinically significant than expert disagreements.
  • How Sensitive Are Safety Benchmarks to Judge Configuration Choices?
    • Quantifies a major but under-discussed source of benchmark instability: judge prompt wording alone shifts harmful-rate estimates by up to 24.2 points.
    • Shows even surface rewording within the same prompt condition can cause large swings and ranking reversals.
    • Provides a direct methodological warning for anyone using LLM-as-judge safety scores in model comparison or governance.
    • Why now: safety benchmarking is increasingly used for deployment and policy decisions, but many reported deltas may be smaller than judge-induced variance.
    • Skepticism: primary analysis is centered on one judge model and one benchmark, without a human accuracy anchor.
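To make the AgentVisor-style mediation concrete, here is a minimal sketch of a privilege-separated tool-call audit. The Suitability/Taint/Integrity framing follows the paper's description above, but every data structure, policy table, and helper here is an illustrative assumption, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    tool: str
    args: dict
    provenance: dict = field(default_factory=dict)  # arg name -> "user" / "model" / "external"

# Hypothetical policy tables; a real deployment would derive these from the task.
SUITABLE_TOOLS = {"summarize_email": {"read_inbox", "render_text"}}
READ_ONLY_TOOLS = {"read_inbox", "render_text"}

def goal_consistent(args: dict, user_goal: str) -> bool:
    """Placeholder for the Integrity check; a real mediator would ask an
    auditor model whether these arguments still serve the user's goal."""
    return True  # stub

def visor_audit(task: str, call: ToolCall, user_goal: str) -> tuple[bool, str]:
    """Trusted Visor audits a tool call proposed by the untrusted Guest.
    On rejection, the Guest would get a semantic exception and one retry
    (illustrative recovery policy, not the paper's exact mechanism)."""
    # Suitability: is this tool plausible at all for the current task?
    if call.tool not in SUITABLE_TOOLS.get(task, set()):
        return False, f"tool {call.tool!r} unsuitable for task {task!r}"
    # Taint: block side-effecting calls whose arguments originate from
    # external content (retrieved pages, emails) rather than the user.
    if call.tool not in READ_ONLY_TOOLS:
        tainted = [a for a, src in call.provenance.items() if src == "external"]
        if tainted:
            return False, f"tainted arguments {tainted} reach a side effect"
    # Integrity: do the arguments still serve the user's stated goal?
    if not goal_consistent(call.args, user_goal):
        return False, "arguments drift from the user's goal"
    return True, "approved"
```

The point of the separation is that injected text can steer the Guest's proposals, but only the Visor's checks decide what actually executes.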

4) Practical next steps

  • Run multi-prompt judge audits for any internal safety benchmark; report ranges and ranking stability, not just a single harmfulness number (a sketch follows at the end of this list).
  • Add a runtime mediation layer for tool-using agents: at minimum, audit tool suitability, goal alignment, and argument integrity before execution.
  • Instrument agents with prefill/runtime anomaly signals where possible—hidden-state or action-sequence monitors can catch failures that output filters miss.
  • For long-horizon agents, test curriculum exposure and process-level rewards before scaling context or model size; many failures are sequencing failures.
  • Separate your agent stack into retrieval, incorporation, and application metrics. If performance is flat, check whether the model is actually using retrieved skills/evidence.
  • In RAG and multimodal systems, rerank by marginal evidence contribution rather than semantic similarity alone; relevance without contribution is a common hallucination source.
  • In high-stakes deployments, track stability, subgroup effects, and abstention distribution across updates, not just aggregate accuracy.
  • For evaluation of synthetic users or multi-agent populations, add population-level geometry checks (coverage, uniformity, complexity) to catch homogenization hidden by per-instance fidelity.
  • If you deploy autonomous agents in adversarial settings, benchmark them in dynamic environments with active defenders or topology updates, not only static tasks.
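For the first item above, a minimal judge-audit sketch. It assumes a judge_harmful(response, judge_prompt) -> bool callable (hypothetical name; in practice an LLM-as-judge call) and reports per-model harmful-rate spreads plus whether the model ranking survives judge-prompt changes.

```python
def judge_audit(responses_by_model: dict[str, list[str]],
                judge_prompts: list[str],
                judge_harmful) -> dict:
    """Score every model under every judge-prompt variant; report the
    harmful-rate range per model and cross-variant ranking stability."""
    rates: dict[tuple[str, int], float] = {}
    for model, responses in responses_by_model.items():
        for i, prompt in enumerate(judge_prompts):
            flags = [judge_harmful(r, prompt) for r in responses]
            rates[(model, i)] = sum(flags) / len(flags)

    report = {}
    for model in responses_by_model:
        vals = [rates[(model, i)] for i in range(len(judge_prompts))]
        report[model] = {
            "min": min(vals),
            "max": max(vals),
            "spread": max(vals) - min(vals),  # judge-induced variance
        }

    # Does the model ordering change when only the judge prompt changes?
    rankings = {
        tuple(sorted(responses_by_model, key=lambda m, i=i: rates[(m, i)]))
        for i in range(len(judge_prompts))
    }
    report["ranking_stable"] = len(rankings) == 1
    return report
```

If the spread approaches the model deltas you care about (the judge-sensitivity paper reports swings up to 24.2 points), single-number comparisons are not trustworthy.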

Generated from per-paper analyses; no external browsing.