Daily AI Paper Report (2026-03-26)

Published:

Chinese version: [中文]

Run stats

  • Candidates: 232
  • Selected: 30
  • Deepread completed: 30
  • Window (UTC): 2026-03-24T00:00:00Z → 2026-03-25T00:00:00Z (arxiv_announce, expanded=0)
Selected papers

  • 2603.22928 | SoK: The Attack Surface of Agentic AI -- Tools, and Autonomy | cs.CR | score 96
    Why: SoK mapping agentic AI attack surface (RAG/tools/multi-agent); strong taxonomy + synthesis for defenders
    Tags: agentic-ai, security, prompt-injection, tool-security, rag-poisoning, multi-agent, survey
  • 2603.23064 | Mind Your HEARTBEAT! Claw Background Execution Inherently Enables Silent Memory Pollution | cs.CR, cs.AI, cs.SI | score 95
    Why: Finds agent memory pollution via background “heartbeat”; concrete vuln model for personal agents
    Tags: agent-security, memory-poisoning, prompt-injection, tool-agents, provenance, background-execution
  • 2603.22853 | Agent Audit: A Security Analysis System for LLM Agent Applications | cs.CR, cs.AI | score 93
    Why: Practical static/security analysis for LLM agent apps (code+configs+creds+privileges); deployable outputs
    Tags: agents, appsec, static-analysis, mcp, credentials, tooling, deployment-security
  • 2603.23171 | Robust Safety Monitoring of Language Models via Activation Watermarking | cs.CR, cs.AI, cs.CY, cs.LG | score 92
    Why: Frames robust LLM monitoring as a security game; targets adaptive evasion with activation watermarking
    Tags: monitoring, misuse-detection, adaptive-adversary, watermarking, inference-security, safety
  • 2603.23269 | Not All Tokens Are Created Equal: Query-Efficient Jailbreak Fuzzing for LLMs | cs.CR, cs.AI, cs.LG | score 92
    Why: Query-efficient jailbreak fuzzing via token-importance; enables stronger red-teaming under budgets
    Tags: jailbreaks, fuzzing, adversarial-prompts, red-teaming, surrogate-models, evaluation
  • 2603.23184 | ImplicitRM: Unbiased Reward Modeling from Implicit Preference Data for LLM alignment | cs.CL, cs.AI, stat.AP | score 92
    Why: Unbiased reward modeling from implicit feedback; tackles bias + missing negatives for RLHF at scale
    Tags: alignment, RLHF, reward-modeling, implicit-feedback, debiasing, preference-learning
  • 2603.22934 | ProGRank: Probe-Gradient Reranking to Defend Dense-Retriever RAG from Corpus Poisoning | cs.AI | score 90
    Why: Training-free retriever-side defense for RAG corpus poisoning using probe gradients + perturbation tests
    Tags: rag, retrieval-security, data-poisoning, dense-retriever, defense, robustness
  • 2603.22767 | Can LLM Agents Generate Real-World Evidence? Evaluating Observational Studies in Medical Databases | cs.AI, cs.CL | score 90
    Why: RWE-bench tests long-horizon agents executing real DB observational studies; structured evidence eval
    Tags: agent-evaluation, benchmarks, tool-use, long-horizon, databases, healthcare
  • 2603.22829 | Improving Safety Alignment via Balanced Direct Preference Optimization | cs.AI | score 90
    Why: Targets safety-alignment overfitting in DPO; proposes balanced objective to improve robustness
    Tags: alignment, DPO, RLHF, safety, overfitting, preference-learning
  • 2603.23268 | SafeSeek: Universal Attribution of Safety Circuits in Language Models | cs.LG, cs.AI | score 88
    Why: Optimization-based attribution of safety circuits; aims for generalizable mechanistic safety interpretability
    Tags: mechanistic-interpretability, safety-circuits, jailbreaks, backdoors, attribution, sparse-masks
  • 2603.22744 | Beyond Binary Correctness: Scaling Evaluation of Long-Horizon Agents on Subjective Enterprise Tasks | cs.AI | score 88
    Why: LH-Bench evaluates subjective enterprise long-horizon workflows with rubrics + artifact-based signals
    Tags: agent-evaluation, benchmarks, enterprise, rubrics, long-horizon, LLM-judges
  • 2603.23047 | Parametric Knowledge and Retrieval Behavior in RAG Fine-Tuning for Electronic Design Automation | cs.CL, cs.AI, cs.CE | score 88
    Why: RAG fine-tuning analysis + new factual attribution eval (TriFEX) and parametric-knowledge metric (PKP)
    Tags: RAG, evaluation, factuality, attribution, metrics, fine-tuning
  • 2603.23114 | Between Rules and Reality: On the Context Sensitivity of LLM Moral Judgment | cs.AI, cs.CL, cs.CY, cs.HC | score 88
    Why: Contextual moral dilemmas dataset; shows LLM moral sensitivity differs from humans; control problem
    Tags: safety, alignment, evaluation, moral-judgment, dataset, context-sensitivity, human-comparison
  • 2603.22868 | Agent-Sentry: Bounding LLM Agents via Execution Provenance | cs.CR, cs.AI | score 87
    Why: Execution provenance to bound/validate agent behavior vs irrelevant/compromised actions; security+privacy angle
    Tags: agents, provenance, runtime-verification, policy-bounding, auditability, security
  • 2603.22882 | TreeTeaming: Autonomous Red-Teaming of Vision-Language Models via Hierarchical Strategy Exploration | cs.LG, cs.CV | score 86
    Why: Autonomous red-teaming for VLMs via hierarchical strategy exploration; seeks novel/diverse exploits
    Tags: red-teaming, vlm-safety, adversarial-testing, automation, evaluation, attack-discovery
  • 2603.23355 | Off-Policy Value-Based Reinforcement Learning for Large Language Models | cs.LG, cs.CL | score 86
    Why: Off-policy value-based RL for LLMs with replay; could improve sample efficiency for reasoning RL
    Tags: RL-for-LLMs, off-policy, value-learning, reasoning, verification-signals, sample-efficiency
  • 2603.22751 | CIPL: A Target-Independent Framework for Channel-Inversion Privacy Leakage in Agents | cs.CR | score 85
    Why: General framework for privacy leakage in agents as channel inversion; broadens beyond memory leakage
    Tags: privacy, agents, information-leakage, attack-framework, side-channels, threat-modeling
  • 2603.23117 | TRAP: Hijacking VLA CoT-Reasoning via Adversarial Patches | cs.CR | score 84
    Why: Shows CoT in VLA robots enables targeted control hijacking via adversarial patches; important embodied risk
    Tags: robotics, vla, chain-of-thought, adversarial-patches, embodied-security, attack
  • 2603.22717 | Does Teaming-Up LLMs Improve Secure Code Generation? A Comprehensive Evaluation with Multi-LLMSecCodeEval | cs.CR, cs.SE | score 84
    Why: Evaluates multi-LLM collaboration + static analysis for secure codegen; practical security pipeline data
    Tags: secure-code-generation, LLM-ensembles, static-analysis, software-security, evaluation
  • 2603.23292 | LLM Olympiad: Why Model Evaluation Needs a Sealed Exam | cs.AI, cs.CL | score 84
    Why: Proposes sealed-exam 'LLM Olympiad' to reduce benchmark leakage/chasing and improve trust in evals
    Tags: evaluation, benchmarks, data-contamination, leaderboards, reproducibility, governance
  • 2603.23485 | Failure of contextual invariance in gender inference with large language models | cs.CL, cs.AI, cs.CY | score 84
    Why: Finds large instability under contextually equivalent prompts in gender inference; evaluation warning
    Tags: reliability, robustness, bias, evaluation, prompt-sensitivity, context-invariance
  • 2603.23501 | MedObvious: Exposing the Medical Moravec's Paradox in VLMs via Clinical Triage | cs.CV, cs.AI, cs.CL | score 83
    Why: Benchmark for medical VLM input-validity sanity checks; targets a safety-critical failure mode
    Tags: evaluation, medical-ai, vlm, robustness, input-validation, benchmark
  • 2603.22714 | PopResume: Causal Fairness Evaluation of LLM/VLM Resume Screeners with Population-Representative Dataset | cs.CY, cs.AI | score 83
    Why: PopResume dataset enables causal/path-specific fairness audits for LLM/VLM resume screeners at scale
    Tags: fairness, auditing, datasets, causal-evaluation, hiring, VLM
  • 2603.22812 | Efficient Hallucination Detection: Adaptive Bayesian Estimation of Semantic Entropy with Guided Semantic Exploration | cs.CL | score 83
    Why: Adaptive semantic-entropy hallucination detection cuts sampling cost by adjusting budget to uncertainty
    Tags: hallucination, uncertainty, semantic-entropy, Bayesian, efficient-eval, reliability
  • 2603.23483 | SpecEyes: Accelerating Agentic Multimodal LLMs via Speculative Perception and Planning | cs.CV, cs.CL | score 82
    Why: Speculative planning to cut agentic multimodal tool-loop latency; system-level speedups for MLLM agents
    Tags: agentic-MLLM, speculative-decoding, tool-use, latency, planning, efficiency
  • 2603.22754 | PRISM: A Dual View of LLM Reasoning through Semantic Flow and Latent Computation | cs.CL | score 82
    Why: Joint step+layer reasoning diagnostics; identifies failure modes like verification loops/divergence
    Tags: interpretability, reasoning, analysis, hidden-states, failure-modes, diagnostics
  • 2603.22823 | Empirical Comparison of Agent Communication Protocols for Task Orchestration | cs.AI | score 81
    Why: Benchmark comparing tool-only vs delegation vs hybrid multi-agent protocols; useful for agent design
    Tags: agents, multi-agent, tool-use, orchestration, benchmarks, protocols
  • 2603.23013 | Knowledge Access Beats Model Size: Memory Augmented Routing for Persistent AI Agents | cs.CL | score 81
    Why: Memory-augmented routing for persistent agents; big cost cuts without training; strong deployment angle
    Tags: agents, memory, efficiency, routing, long-term-interaction, serving
  • 2603.23149 | Describe-Then-Act: Proactive Agent Steering via Distilled Language-Action World Models | cs.AI | score 80
    Why: Fast 'describe-then-act' steering layer predicts outcomes from latent+actions; aims at proactive safety
    Tags: agents, world-models, steering, safety, latents, planning
  • 2603.22879 | Confidence Calibration under Ambiguous Ground Truth | cs.LG, cs.AI | score 79
    Why: Calibration breaks with annotator disagreement; proposes ambiguity-aware post-hoc calibrators
    Tags: calibration, uncertainty, evaluation, ambiguous-labels, reliability, post-hoc

AI Paper Insight Brief

2026-03-26

0) Executive takeaways (read this first)

  • Agent security is shifting from “prompt injection” to “system surfaces”: multiple papers show the dominant risks are in channels (tool args/returns, traces), architectures (heartbeat background execution), and runtime behavior (execution provenance), not just final text.
  • Practical defenses are becoming more “systems-y” and measurable: retriever-side reranking against RAG poisoning (no generator calls), runtime tool-call bounding via provenance graphs, and agent-aware static analysis for MCP configs all report strong security/utility trade-offs.
  • Evaluation is moving beyond single-number correctness: long-horizon subjective enterprise tasks (rubric + artifact contracts + human validation), end-to-end medical observational studies on a real DB backend, and medical VLM “input sanity checks” expose failures that classic benchmarks miss.
  • Alignment training and monitoring are getting more “distribution-aware”: B-DPO targets preference-pair comprehension imbalance; ImplicitRM makes implicit-feedback reward modeling unbiased under missing negatives and propensity bias; activation watermarking adds keyed, adversary-aware monitoring.
  • Context sensitivity is a recurring failure mode: minimal context changes can flip gender pronoun inference; moral judgments shift with small contextual cues and differ from humans; these effects are controllable (activation steering) but not free (small capability drops).
  • Cost/latency optimizations increasingly rely on gating + verification: speculative “tool-free” bypass for agentic MLLMs and memory-augmented routing show large speedups/cost cuts, but hinge on confidence/calibration and retrieval fidelity.
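The channel framing in the first takeaway suggests a concrete measurement habit: score leakage per observable channel (final text, tool arguments/returns, traces) instead of only redacting output. The sketch below counts how often a secret surfaces in each channel across episodes; the episode layout and the rate definition are my own loose illustration, not CIPL's exact AER/CER metrics.

```python
def channel_leak_rates(episodes, secret):
    """episodes: list of dicts mapping channel name -> emitted text.

    Returns, per channel, the fraction of episodes where the secret
    appeared verbatim in that channel's output.
    """
    channels = {ch for ep in episodes for ch in ep}
    return {
        ch: sum(secret in ep.get(ch, "") for ep in episodes) / len(episodes)
        for ch in sorted(channels)
    }

episodes = [
    {"final_text": "done", "tool_args": "lookup(ssn=123-45-6789)", "trace": "..."},
    {"final_text": "done", "tool_args": "lookup(name=alice)", "trace": "ssn=123-45-6789"},
]
rates = channel_leak_rates(episodes, "123-45-6789")
print(rates)  # final_text never leaks; tool_args and trace each leak in 1/2 episodes
```

In this toy run, an output-only redaction check would report zero leakage while the tool-argument and trace channels each leak half the time, which is exactly the blind spot the takeaway describes.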

2) Key themes (clusters)

Theme: Agent security across channels, tools, and autonomy

Theme: RAG and memory: robustness, poisoning, and “knowledge vs model size”

Theme: Evaluation for long-horizon, subjective, and high-stakes workflows

Theme: Alignment & monitoring under distribution shift and adversaries

Theme: Context sensitivity in fairness and moral judgment (and controllability)

3) Technical synthesis

  • Many works converge on “gating + fallback” architectures: SpecEyes gates tool-free answers via separability; memory routing gates escalation via logprob confidence; Agent-Sentry gates tool calls via provenance graphs + intent judge; hallucination detection gates sampling via entropy-variance stopping.
  • Judge dependence is everywhere, but used differently: LH-Bench uses multiple judges + human validation; RWE-bench uses gated questions and a cohort judge; TreeTeaming and MedObvious highlight format/judge sensitivity risks; TriFEX uses LLM attribution with measured accuracy (~80%).
  • Security measurement is becoming rate-based and lifecycle-aware: secure-code rate across gen/detect/patch; ASR vs utility; CER/AER for leakage; poison hit/recall at retrieval stage plus downstream ASR.
  • Training-free defenses are favored for deployability: ProGRank reranks without retraining; Agent Audit is static; SpecEyes is routing; memory routing is retrieval + confidence; these contrast with fine-tuning-based monitoring (activation watermarking) and alignment (B-DPO).
  • Causal/structural decompositions are spreading: PopResume decomposes protected-attribute effects into direct vs mediated (business necessity vs redlining); ImplicitRM decomposes implicit feedback into preference vs action propensity; calibration paper decomposes “true-label” vs voted-label calibration targets.
  • Sparse structure keeps appearing: TriageFuzz finds refusal dominated by sparse token regions; SafeSeek finds extremely sparse safety/backdoor circuits; both imply defenses/attacks can focus on small substructures.
  • Context and modality increase direct access to sensitive attributes: PopResume shows photos increase direct effects (NDE) in VLM screeners; TRAP shows CoT can dominate action generation and be hijacked via visual patches.
  • Cost is now a first-class metric: multi-LLM secure coding quantifies CodeQL runtime dominance; MCP vs A2A shows token bloat crossover; SpecEyes formalizes throughput speedup; memory routing reports ~96% effective-cost reduction vs large model.
  • Reliability hinges on intermediate artifacts: cohort audit tables, manifests/screenshots, tool-call provenance, and triple attributions are increasingly used as “contracts” for evaluation and debugging.
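The "gating + fallback" skeleton that the first bullet identifies across SpecEyes, memory routing, Agent-Sentry, and entropy-based stopping can be written down generically: a cheap path answers when a confidence signal clears a threshold, otherwise control escalates to a trusted fallback. Everything below (the `GatedPipeline` name, the threshold, the lambda scorers) is invented for illustration and taken from none of the papers.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class GatedPipeline:
    cheap: Callable[[str], tuple[str, float]]   # returns (answer, confidence)
    expensive: Callable[[str], str]             # trusted but costly fallback
    threshold: float = 0.8                      # escalation gate

    def run(self, query: str) -> tuple[str, str]:
        answer, conf = self.cheap(query)
        if conf >= self.threshold:
            return answer, "cheap"          # gate passed: keep the fast path
        return self.expensive(query), "fallback"  # gate failed: escalate

# Toy scorers standing in for a small/large model pair.
pipe = GatedPipeline(
    cheap=lambda q: ("cheap:" + q, 0.9 if len(q) < 10 else 0.3),
    expensive=lambda q: "expensive:" + q,
)
print(pipe.run("short"))             # high confidence -> cheap path
print(pipe.run("a very long query")) # low confidence -> fallback
```

The design question each paper then answers differently is what the confidence signal is (separability, logprobs, provenance match, entropy variance) and how its calibration is validated.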

4) Top 5 papers (with “why now”)

1) Agent-Sentry: Bounding LLM Agents via Execution Provenance

  • Introduces functionality graphs from traces (benign/adversarial/ambiguous) and runtime interception of tool calls.
  • Uses an intent-alignment judge restricted to trusted inputs only (prompt + tool specs + tool history), explicitly excluding retrieved content.
  • Reports strong security/utility trade-offs on a new 6,733-trace benchmark (utility 94.61%, ASR 9.46% with full coverage).
  • Skepticism: coverage dependence and mimicry attacks (benign paths with malicious parameters) can evade.
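The allow/block/defer logic above can be sketched as a transition check over tool-call edges learned from labeled traces. This is a minimal toy in the spirit of Agent-Sentry, not its actual functionality-graph construction: the edge representation, the labels, and the default judge are all assumptions.

```python
class ProvenanceGate:
    def __init__(self):
        self.benign = set()       # (prev_tool, next_tool) edges from benign traces
        self.adversarial = set()  # edges seen in attack traces

    def learn(self, trace, label):
        # Record consecutive tool-call transitions, including the entry edge.
        for edge in zip(["<start>"] + trace, trace):
            (self.benign if label == "benign" else self.adversarial).add(edge)

    def decide(self, prev_tool, next_tool, judge=lambda edge: "allow"):
        edge = (prev_tool, next_tool)
        if edge in self.benign and edge not in self.adversarial:
            return "allow"
        if edge in self.adversarial and edge not in self.benign:
            return "block"
        return judge(edge)  # unknown or ambiguous: defer to an intent judge

gate = ProvenanceGate()
gate.learn(["search", "read_file", "summarize"], "benign")
gate.learn(["search", "send_email"], "adversarial")
print(gate.decide("search", "read_file"))   # allow
print(gate.decide("search", "send_email"))  # block
```

Note how the mimicry-attack skepticism shows up immediately: a benign edge with malicious parameters passes this gate, which is why the paper's intent judge sees tool arguments, not just path membership.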

2) ProGRank: Probe-Gradient Reranking to Defend Dense-Retriever RAG from Corpus Poisoning

  • Training-free, retriever-side defense using probe-gradient instability under perturbations + score gating.
  • Reduces poisoned Top-K exposure and reports strong downstream robustness (macro-average judge-based ASR reported as 0.000 at Top-5 in their eval).
  • Much faster than costly baselines (mean 4.73 s/query vs 118.17 s/query for RAGuard).
  • Skepticism: compute overhead depends on probing repeats/candidate buffer; clean utility trade-offs are dataset-dependent.
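The core reranking idea (score passages under small query perturbations and demote unstable ones) can be sketched as follows. The instability statistic (standard deviation of probed cosines) and the penalty weight are illustrative stand-ins, not ProGRank's actual probe-gradient formulation.

```python
import math
import random

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def perturbation_rerank(query_vec, docs, n_probes=8, eps=0.05, penalty=1.0, seed=0):
    """docs: list of (doc_id, embedding). Returns (doc_id, score) best-first."""
    rng = random.Random(seed)
    reranked = []
    for doc_id, doc_vec in docs:
        base = cosine(query_vec, doc_vec)
        probes = []
        for _ in range(n_probes):
            # Probe: jitter the query embedding and re-score the passage.
            q = [x + rng.uniform(-eps, eps) for x in query_vec]
            probes.append(cosine(q, doc_vec))
        mean = sum(probes) / n_probes
        var = sum((p - mean) ** 2 for p in probes) / n_probes
        # Penalize passages whose similarity is unstable under perturbation.
        reranked.append((doc_id, base - penalty * math.sqrt(var)))
    return sorted(reranked, key=lambda t: t[1], reverse=True)

query = [1.0, 0.0]
docs = [("on_topic", [1.0, 0.0]), ("off_topic", [0.0, 1.0])]
ranked = perturbation_rerank(query, docs)  # on_topic ranks first
```

The compute-overhead skepticism is visible in the loop structure: cost scales with `n_probes` times the candidate buffer, which is the trade-off the paper reports against heavier baselines.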

3) Agent Audit: A Security Analysis System for LLM Agent Applications

  • Agent-aware static analysis for Python agents + MCP config semantics, with tool-boundary tainting and confidence tiers.
  • New benchmark AVB (22 samples, 42 vulns); reports 95.24% recall (40/42) vs Semgrep/Bandit ~24–30% recall.
  • CI/IDE-ready outputs (SARIF) and sub-second scanning on 22k LOC.
  • Skepticism: intra-procedural taint only; MCP heuristics contribute false positives; limited JS/TS support.
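"Tool-boundary tainting" is the key concept here: anything crossing the tool boundary is untrusted until sanitized. Agent Audit does this statically; the toy below shows the same policy dynamically with a marker type, purely to make the idea concrete (the `Tainted` class, `from_tool`, and `shell_exec` sink are all invented).

```python
class Tainted(str):
    """String subclass marking data that crossed the tool boundary."""

def from_tool(value: str) -> Tainted:
    return Tainted(value)  # every tool return value starts out tainted

def sanitize(value: str) -> str:
    # Stand-in for real validation/escaping; str() drops the subclass marker.
    return str(value)

FINDINGS = []

def shell_exec(cmd: str):
    """A dangerous sink: refuse tainted input and record a finding."""
    if isinstance(cmd, Tainted):
        FINDINGS.append(f"tainted value reached shell_exec: {cmd!r}")
        return None
    return f"ran: {cmd}"

out = from_tool("rm -rf /tmp/x")  # attacker-influenced tool output
shell_exec(out)                   # flagged: taint flowed into a sink
shell_exec(sanitize(out))         # allowed only after explicit sanitization
```

A real analysis also has to propagate taint through string operations and across procedures, which is exactly where the paper's intra-procedural limitation bites.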

4) PopResume: Causal Fairness Evaluation of LLM/VLM Resume Screeners with Population-Representative Dataset

  • Provides a population-grounded resume dataset (60,884 resumes) and path-specific causal decomposition: direct vs mediated, and mediated into business-necessity (BIE) vs redlining (RIE).
  • Finds discrimination patterns masked by outcome-only metrics (e.g., cancellation where TE≈0 but NDE/NIE nonzero; mixed mediation in 53/120 cases).
  • Shows adding photos can increase direct discrimination magnitude in VLMs (NDE increases in 8/20 paired cases).
  • Skepticism: synthetic rendering + U.S.-specific population assumptions; mediator grouping (B vs R) is context/jurisdiction dependent.
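The "cancellation" finding above (TE≈0 with nonzero NDE/NIE) is easy to see in a toy linear model where the total effect splits into a direct path and a mediated path. The coefficients below are invented for illustration; PopResume estimates these effects from data rather than assuming a known linear model.

```python
def effects(a_direct, a_to_mediator, mediator_to_score):
    """Linear-model effect decomposition for a binary protected attribute."""
    nde = a_direct                            # attribute -> score, mediator held fixed
    nie = a_to_mediator * mediator_to_score   # attribute -> mediator -> score
    te = nde + nie                            # total effect (linear case)
    return te, nde, nie

# Direct effect +0.5 exactly cancelled by a mediated effect of -0.5:
te, nde, nie = effects(a_direct=0.5, a_to_mediator=1.0, mediator_to_score=-0.5)
print(te, nde, nie)  # 0.0 0.5 -0.5
```

An outcome-only audit sees TE = 0 and reports "no effect", while the path-specific view exposes two offsetting discrimination mechanisms, which is the paper's argument for auditing mechanisms rather than outcomes.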

5) Robust Safety Monitoring of Language Models via Activation Watermarking

  • Keyed internal monitoring: fine-tune harmful activations to align with secret directions; detect via cosine similarity at inference.
  • Reports higher AUROC across jailbreak families (e.g., AutoDAN AUROC 0.9048) and improved low-FPR operation; adds a “secret extraction” attribution game (~80% diagonal accuracy).
  • Low inference overhead (projection) vs extra forward passes for guard models.
  • Skepticism: assumes black-box attackers; no provable guarantees; some utility drops (notably GSM8K −7.13 pp).
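The detection side of keyed activation monitoring reduces to a cosine-similarity test against a secret direction. The sketch below fakes the fine-tuning step by constructing an activation near the key; the key dimension, threshold, and toy activations are all invented, and the real method learns this alignment during training.

```python
import math
import random

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cosine(u, v):
    return sum(a * b for a, b in zip(unit(u), unit(v)))

rng = random.Random(42)
dim = 64
secret_key = unit([rng.gauss(0, 1) for _ in range(dim)])  # held server-side

def monitor(activation, threshold=0.5):
    """Flag activations whose direction aligns with the secret key."""
    return cosine(activation, secret_key) >= threshold

# An activation nudged toward the key (standing in for what watermark
# fine-tuning would produce) vs. one constructed orthogonal to it.
watermarked = [k + 0.1 * rng.gauss(0, 1) for k in secret_key]
e0 = [1.0] + [0.0] * (dim - 1)
benign = [a - cosine(e0, secret_key) * k for a, k in zip(e0, secret_key)]
print(monitor(watermarked), monitor(benign))  # True False
```

The low inference overhead claim corresponds to this being a single projection per monitored activation, versus running an extra guard-model forward pass.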

5) Practical next steps

  • Instrument your agent for channel-level leakage: enumerate Obs channels (final text, tool args/returns, traces) and measure leakage with CIPL-style metrics (AER/CER) rather than only output redaction.
  • Add pre-deployment agent-aware static checks: scan for tool-boundary taint flows, prompt construction risks, and MCP over-privilege/unverified servers (Agent Audit-style SARIF in CI).
  • Deploy runtime tool-call bounding: log execution provenance and enforce allow/block based on learned benign/adversarial paths; route ambiguous calls to an intent judge that excludes untrusted retrieved content (Agent-Sentry pattern).
  • Harden RAG against poisoning at retrieval time: try retriever-side reranking/penalties on top-B candidates; track poison hit/recall and downstream ASR, plus clean EM trade-offs (ProGRank-style evaluation).
  • Treat memory as a security boundary: separate background “heartbeat” context from user-facing context; require provenance + explicit user visibility before promoting to long-term memory (HEARTBEAT E→M→B).
  • Upgrade fairness audits from outcomes to mechanisms: for high-stakes scoring (hiring), estimate path-specific effects (direct vs mediated; business necessity vs proxy/redlining) and test photo-induced direct effects (PopResume).
  • Stress-test context sensitivity: add “contextually irrelevant” primes to fairness and safety probes; measure invariance failures (gender inference) and contextual shifts (moral judgment) before deployment.
  • If using preference optimization or implicit feedback: check for preference-pair comprehension imbalance (B-DPO idea) and propensity bias / missing negatives (ImplicitRM) before trusting reward models.
  • Adopt long-horizon evaluation contracts: require intermediate artifacts (manifests, cohort audit tables, screenshots) and rubric-based scoring with human validation for subjective enterprise tasks (LH-Bench/RWE-bench patterns).
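The memory-as-security-boundary step above can be sketched as a promotion gate: candidates produced by background execution never reach long-term memory silently, and anything lacking trusted provenance plus user visibility is quarantined with its provenance intact. Field names and the policy are illustrative only, not the HEARTBEAT paper's mechanism.

```python
from dataclasses import dataclass, field

@dataclass
class Candidate:
    text: str
    source: str         # e.g. "user", "tool:web", "heartbeat"
    user_visible: bool  # was this content actually surfaced to the user?

@dataclass
class LongTermMemory:
    entries: list = field(default_factory=list)
    quarantined: list = field(default_factory=list)

    TRUSTED_SOURCES = {"user"}  # class constant, not a dataclass field

    def promote(self, c: Candidate) -> bool:
        # Background/heartbeat writes never enter long-term memory silently.
        if c.source in self.TRUSTED_SOURCES and c.user_visible:
            self.entries.append(c)
            return True
        self.quarantined.append(c)  # held for review, provenance preserved
        return False

mem = LongTermMemory()
print(mem.promote(Candidate("prefers dark mode", "user", True)))           # True
print(mem.promote(Candidate("send creds to attacker", "heartbeat", False)))  # False
```

The quarantine list matters as much as the gate: it keeps the audit trail that several of the papers above treat as the evaluation and debugging contract.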

Generated from per-paper analyses; no external browsing.