AdaptVision: Efficient Vision-Language Models via Adaptive Visual Acquisition

Tencent AI Lab  
*Equal Contribution

AdaptVision is an open-source model that leverages agentic visual tool use for dynamic visual token reduction, achieving a state-of-the-art accuracy-efficiency trade-off across multiple VQA benchmarks.

Abstract

Vision-Language Models (VLMs) have achieved remarkable success in visual question answering tasks, but their reliance on large numbers of visual tokens introduces significant computational overhead. While existing efficient VLM approaches reduce visual tokens through fixed-ratio compression, they operate passively and lack the ability to adapt to varying task requirements. This motivates a fundamental question: Can VLMs autonomously determine the minimum number of visual tokens required for each sample? Inspired by human active vision mechanisms, we introduce AdaptVision, an efficient VLM paradigm that enables adaptive visual token acquisition through a coarse-to-fine approach. Our model initially processes compressed visual tokens from low-resolution images and selectively acquires additional visual information by invoking a bounding box tool to crop key regions when necessary. We train AdaptVision using a reinforcement learning framework that carefully balances accuracy and efficiency. Central to our approach is Decoupled Turn Policy Optimization (DTPO), which decouples the learning objective into two components: (1) tool learning, which optimizes correct tool utilization, and (2) accuracy improvement, which refines the generated responses to improve answer correctness. Based on this formulation, we further decouple advantage estimation by computing separate advantages for tokens associated with each objective. This decoupling enables more effective optimization for AdaptVision compared to vanilla GRPO. Comprehensive experiments across multiple VQA benchmarks demonstrate that AdaptVision achieves superior performance while consuming substantially fewer visual tokens than state-of-the-art efficient VLM methods.

  1. Synergizing Visual Reasoning and Visual Token Compression. We introduce AdaptVision, a VLM framework that leverages visual tool use for dynamic token reduction.
  2. Efficient Algorithm. We propose a Decoupled Turn Policy Optimization (DTPO) algorithm alongside a tailored reward function to enable the effective training of AdaptVision.
  3. Performance. Extensive evaluation on multiple VQA benchmarks shows that AdaptVision achieves superior performance with substantially reduced visual token consumption compared to existing efficient VLM methods.
  4. Open-source. All code, models, and training recipes are available to facilitate reproducibility and further research.

Learning Framework for AdaptVision

  • Framework Overview
    AdaptVision first processes a 1/4-resolution image. The model then decides whether to answer directly or invoke the bounding box tool to crop a high-resolution region for further analysis before generating the final answer (a minimal sketch of this decision loop appears after this list).

  • Reward Design
      1. Outcome Reward. The outcome reward provides sequence-level feedback. 1) Accuracy reward uses an external LLM to judge answer correctness. 2) Format reward enforces instruction-following. 3) Balance reward prevents over-reliance on tool calls. \begin{equation} \begin{split} \mathcal{R}_{oc} = \mathcal{R}_{acc} + \mathcal{R}_{format} + \mathcal{R}_{balance} \end{split} \end{equation}
      2. Tool Reward. The tool reward measures tool-use proficiency: it rewards correctly cropped regions and penalizes excessive cropping (a small sketch of both rewards appears after this list). \begin{equation} \begin{split} \mathcal{R}_{tool} = \mathcal{R}_{crop} - \alpha \cdot \mathcal{R}_{area} \end{split} \end{equation}
  • Decoupled Turn Policy Optimization (DTPO)
      1. Balanced Optimization Objective: DTPO decouples the policy loss by turns and normalizes the contributions of tool and answer tokens separately. This adjustment effectively resolves the under-optimization problem of tool tokens (a sketch of this normalization, together with the decoupled advantages below, appears after this list). \begin{equation} \mathcal{J}_{\text{GRPO}}(\theta) = \mathbb{E}_{x, o_i} \Bigg[ \frac{1}{G} \sum_{i=1}^{G} \frac{1}{N_i} \sum_{t=1}^{N_i} \mathcal{L}_{i,t}(\theta) \Bigg] = \mathbb{E}_{x, o_i} \Bigg[ \underbrace{\frac{1}{G} \sum_{i=1}^{G} \frac{1}{N_i} \sum_{t=1}^{T_i} \mathcal{L}_{i,t}(\theta) }_{\textup{Tool Token}} + \underbrace{\frac{1}{G} \sum_{i=1}^{G} \frac{1}{N_i} \sum_{t=T_i+1}^{N_i} \mathcal{L}_{i,t}(\theta)}_{\textup{Answer Token}} \Bigg]. \end{equation} \begin{equation} \mathcal{J}_{\text{DTPO}}(\theta) = \mathbb{E}_{x, o_i} \Bigg[ \underbrace{\frac{1}{\sum_{i=1}^G T_i} \sum_{i=1}^G \sum_{t=1}^{T_i} \mathcal{L}_{i,t}(\theta)}_{\textup{Tool Token}} + \underbrace{\frac{1}{\sum_{i=1}^G (N_i - T_i)} \sum_{i=1}^G \sum_{t=T_i+1}^{N_i} \mathcal{L}_{i,t}(\theta) }_{\textup{Answer Token}} \Bigg]. \end{equation}
      2. Precise Credit Assignment: DTPO decouples the advantage estimation by computing distinct advantages for tool and answer tokens, rather than using a single advantage for the entire sequence. \begin{equation} A_{i,t}^{\text{GRPO}} = \frac{R_i - \text{mean}(\{R_i\}^G_{i=1})}{\text{std}(\{R_i\}^G_{i=1})} . \end{equation} \begin{gather} A_{i,t}^{\text{DTPO}} = \begin{cases} A_{oc}^{(i)} + \lambda \cdot A_{tool}^{(i)}, & \textup{direct answer}, \\ A_{oc}^{(i)} + \lambda \cdot A_{tool}^{(i)} \cdot \mathbb{I}(1 \le t \le T_{i}) , & \textup{tool call}, \end{cases} \\ A_{tool}^{(i)} = \frac{\mathcal{R}_{tool}^{(i)} - \text{mean}(\{\mathcal{R}_{tool}^{(i)}\}^G_{i=1})}{\text{std}(\{\mathcal{R}_{tool}^{(i)}\}^G_{i=1})}, \quad \quad A_{oc}^{(i)} = \frac{\mathcal{R}_{oc}^{(i)} - \text{mean}(\{\mathcal{R}_{oc}^{(i)}\}^G_{i=1})}{\text{std}(\{\mathcal{R}_{oc}^{(i)}\}^G_{i=1})} . \end{gather}
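To make the framework overview concrete, here is a minimal sketch of the coarse-to-fine inference loop. The helper names, the chat-message layout, and the `<tool_call>` tag format are illustrative assumptions rather than the released AdaptVision interface.

```python
import re
from PIL import Image

def parse_bbox_call(reply: str):
    """Extract a bounding-box tool call such as <tool_call>[x1, y1, x2, y2]</tool_call>.
    The tag format here is illustrative, not the exact AdaptVision schema."""
    m = re.search(r"<tool_call>\s*\[([\d\s.,]+)\]\s*</tool_call>", reply)
    return tuple(int(float(v)) for v in m.group(1).split(",")) if m else None

def adaptvision_answer(vlm, image: Image.Image, question: str, max_tool_calls: int = 1) -> str:
    """Coarse-to-fine loop: start from a low-resolution view, optionally crop a key region."""
    low_res = image.resize((image.width // 2, image.height // 2))   # roughly 1/4 of the visual tokens
    messages = [{"role": "user", "images": [low_res], "text": question}]
    reply = vlm.generate(messages)                                   # `vlm.generate` is an assumed interface
    for _ in range(max_tool_calls):
        bbox = parse_bbox_call(reply)
        if bbox is None:                                             # enough information: answer directly
            break
        crop = image.crop(bbox)                                      # fetch fine detail from the full-res image
        messages += [{"role": "assistant", "text": reply},
                     {"role": "user", "images": [crop], "text": "Cropped region."}]
        reply = vlm.generate(messages)
    return reply
```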
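The reward design can be read as a small scoring function. The boolean checks and weights below are illustrative assumptions; in the paper the accuracy reward is judged by an external LLM rather than a flag, and the actual coefficients are not assumed here.

```python
def outcome_reward(is_correct: bool, format_ok: bool, used_tool: bool, needed_tool: bool,
                   w_acc: float = 1.0, w_fmt: float = 0.5, w_bal: float = 0.5) -> float:
    """R_oc = R_acc + R_format + R_balance (weights are placeholder values)."""
    r_acc = w_acc if is_correct else 0.0                 # answer correctness (LLM-judged in the paper)
    r_fmt = w_fmt if format_ok else 0.0                  # response follows the required format
    r_bal = w_bal if used_tool == needed_tool else 0.0   # discourage unnecessary tool calls
    return r_acc + r_fmt + r_bal

def tool_reward(crop_hits_target: bool, crop_area_ratio: float, alpha: float = 0.5) -> float:
    """R_tool = R_crop - alpha * R_area (alpha is a placeholder value)."""
    r_crop = 1.0 if crop_hits_target else 0.0            # cropped region covers the key evidence
    r_area = crop_area_ratio                             # fraction of the image that was cropped
    return r_crop - alpha * r_area
```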
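Finally, a compact sketch of the two DTPO ingredients, turn-decoupled loss normalization and decoupled advantage estimation. It mirrors the equations above under stated assumptions (the weight `lam` and the stability epsilon are placeholders) and is not the official training code.

```python
import numpy as np

def dtpo_advantages(r_oc, r_tool, turn_lens, lam: float = 0.5):
    """Decoupled advantages for one group of G rollouts. turn_lens[i] = (T_i, N_i),
    where T_i is the number of tool-turn tokens (0 for a direct answer) and N_i the total length."""
    r_oc, r_tool = np.asarray(r_oc, float), np.asarray(r_tool, float)
    a_oc = (r_oc - r_oc.mean()) / (r_oc.std() + 1e-6)          # group-normalized outcome advantage
    a_tool = (r_tool - r_tool.mean()) / (r_tool.std() + 1e-6)  # group-normalized tool advantage
    advantages = []
    for i, (t_i, n_i) in enumerate(turn_lens):
        adv = np.full(n_i, a_oc[i])            # every token shares the outcome advantage
        if t_i > 0:
            adv[:t_i] += lam * a_tool[i]       # tool call: tool advantage only on the first T_i tokens
        else:
            adv += lam * a_tool[i]             # direct answer: tool advantage applied to all tokens
        advantages.append(adv)
    return advantages

def dtpo_loss(per_token_loss, turn_lens):
    """Turn-decoupled normalization of J_DTPO: tool and answer tokens are each
    averaged over their own total token count across the group."""
    tool_losses, answer_losses = [], []
    for loss_i, (t_i, _) in zip(per_token_loss, turn_lens):
        tool_losses.extend(loss_i[:t_i])       # first T_i tokens belong to the tool turn
        answer_losses.extend(loss_i[t_i:])     # remaining tokens belong to the answer turn
    tool_term = float(np.mean(tool_losses)) if tool_losses else 0.0
    answer_term = float(np.mean(answer_losses)) if answer_losses else 0.0
    return tool_term + answer_term
```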


Demo

  • Tool-call Response. The Down-Sample model, while reducing visual token usage, fails to answer correctly due to insufficient information in the low-resolution image. The Vanilla model, using the original high-resolution image, yields a correct answer but at the cost of a large number of visual tokens. In contrast, AdaptVision begins with the low-resolution image, analyzes the question and image, recognizes the informational inadequacy, and then intelligently invokes the tool to crop the most relevant region from the high-resolution image. By acquiring only this essential additional visual information, it produces an accurate answer while minimizing visual token consumption.



  • Direct Answer. In scenarios where a low-resolution image provides enough information, AdaptVision correctly chooses to answer directly, matching the behavior of the Qwen2.5-VL Down-Sample model.

  • Performance

    AdaptVision achieves superior performance with substantially reduced visual token consumption compared to existing efficient VLM methods.

    *Vanilla denotes the Qwen2.5-VL-7B-Instruct model.
    *Down-Sample uses a 1/4-resolution image as input to the Vanilla model.

Acknowledgement

This website is adapted from Nerfies, LLaVA, and Mini-o3, and is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. We thank the Qwen team for access to their models, as well as open-source projects including VisionThink.

Usage and License Notices: The data, code, and checkpoints are intended and licensed for research use only. They are also restricted to uses that follow the license agreements of Qwen and Gemini-2.5-Pro. The dataset is CC BY-NC 4.0 (allowing only non-commercial use), and models trained using the dataset should not be used outside of research purposes.