Details the Q-Former architecture: a 12-layer BERT-based model using 32 learnable query embeddings. These queries use cross-attention to extract visual information for MLLM input.

Visual Prompt Generation: Cross-Attention in Q-Former

Abstract and 1 Introduction

  2. Related Work

    2.1. Multimodal Learning

    2.2. Multiple Instance Learning

  3. Methodology

    3.1. Preliminaries and Notations

    3.2. Relations between Attention-based VPG and MIL

    3.3. MIVPG for Multiple Visual Inputs

    3.4. Unveiling Instance Correlation in MIVPG for Enhanced Multi-instance Scenarios

  4. Experiments and 4.1. General Setup

    4.2. Scenario 1: Samples with Single Image

    4.3. Scenario 2: Samples with Multiple Images, with Each Image as a General Embedding

    4.4. Scenario 3: Samples with Multiple Images, with Each Image Having Multiple Patches to be Considered and 4.5. Case Study

  5. Conclusion and References

\ Supplementary Material

A. Detailed Architecture of QFormer

B. Proof of Proposition

C. More Experiments

\ Figure 7. Overview of QFormer

A. Detailed Architecture of QFormer

The architecture overview is depicted in Figure 7. Specifically, QFormer is initialized as a BERT-based model [8] comprising L = 12 layers. In contrast to typical BERT models that process textual inputs, QFormer takes R = 32 learnable query embeddings as inputs. These queries extract visual information from the input visual features during Stage-1 pretraining of BLIP-2 [22] and, after projection, serve as visual prompt embeddings for the LLM.
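\ The sketch below illustrates this query-based extraction pipeline under stated assumptions; it is not the released BLIP-2 code. The class and attribute names (VisualPromptGenerator, llm_proj), the LLM dimension, and the use of nn.TransformerDecoder as a stand-in for the 12-layer QFormer are all illustrative. Note that nn.TransformerDecoder applies cross-attention in every layer, whereas the actual QFormer inserts it only every G layers (see the per-layer sketch further below).

```python
# Minimal sketch: R = 32 learnable queries are fed through a QFormer-like
# module and projected into the LLM embedding space as visual prompts.
# Hypothetical names and dimensions; a stand-in, not BLIP-2's implementation.
import torch
import torch.nn as nn

class VisualPromptGenerator(nn.Module):
    def __init__(self, num_queries=32, hidden_dim=768, llm_dim=4096, num_layers=12):
        super().__init__()
        # R = 32 learnable query embeddings replace BERT's token inputs.
        self.queries = nn.Parameter(0.02 * torch.randn(1, num_queries, hidden_dim))
        # Stand-in for the 12-layer BERT-based QFormer (cross-attends in every layer here).
        layer = nn.TransformerDecoderLayer(
            d_model=hidden_dim, nhead=12, dim_feedforward=4 * hidden_dim, batch_first=True
        )
        self.qformer = nn.TransformerDecoder(layer, num_layers=num_layers)
        # Projection that maps the query outputs into the LLM's embedding space.
        self.llm_proj = nn.Linear(hidden_dim, llm_dim)

    def forward(self, visual_embeds):                   # (B, N_patches, hidden_dim)
        q = self.queries.expand(visual_embeds.size(0), -1, -1)
        q = self.qformer(tgt=q, memory=visual_embeds)   # queries attend to visual tokens
        return self.llm_proj(q)                         # (B, 32, llm_dim) visual prompts

prompts = VisualPromptGenerator()(torch.randn(2, 257, 768))
print(prompts.shape)                                    # torch.Size([2, 32, 4096])
```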

\ Inside the QFormer, each layer contains a self-attention module composed of a multi-head attention component and a forward module (Linear, LayerNorm, and a residual connection). A randomly initialized cross-attention module is inserted every G layers, where the learnable query embeddings attend to the visual embeddings. In the main paper, for conciseness, we condensed the multi-head attention and forward modules into self-(cross-)attention modules, and we illustrated only the modifications MIVPG makes to the cross-attention module, since the self-attention modules remain unchanged. The final QFormer output is the last layer's query embeddings.
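\ The following self-contained sketch reflects one reading of the layer description above (structure inferred from this text, not taken from the released code): self-attention over the queries, a randomly initialized cross-attention block present only in every G-th layer, and a feed-forward module with Linear, LayerNorm, and residual connections. G = 2, the head count, and the hidden size are assumptions for illustration.

```python
# One QFormer-style layer: self-attention among queries, cross-attention to
# visual embeddings every `cross_every` (G) layers, then a feed-forward module.
import torch
import torch.nn as nn

class QFormerLayer(nn.Module):
    def __init__(self, dim=768, heads=12, cross_every=2, layer_idx=0):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.self_norm = nn.LayerNorm(dim)
        # Cross-attention is only inserted in every G-th layer (randomly initialized).
        self.has_cross = (layer_idx % cross_every == 0)
        if self.has_cross:
            self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.cross_norm = nn.LayerNorm(dim)
        # Forward module: Linear layers with LayerNorm and a residual connection.
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.ffn_norm = nn.LayerNorm(dim)

    def forward(self, queries, visual_embeds):
        # Self-attention among the learnable queries (residual + LayerNorm).
        queries = self.self_norm(queries + self.self_attn(queries, queries, queries)[0])
        # Cross-attention: queries attend to the visual embeddings.
        if self.has_cross:
            queries = self.cross_norm(
                queries + self.cross_attn(queries, visual_embeds, visual_embeds)[0]
            )
        # Feed-forward with residual connection.
        return self.ffn_norm(queries + self.ffn(queries))

# Toy usage: L = 12 layers, cross-attention every G = 2 layers, R = 32 queries.
layers = [QFormerLayer(layer_idx=i) for i in range(12)]
q, v = torch.randn(2, 32, 768), torch.randn(2, 257, 768)
for layer in layers:
    q = layer(q, v)
print(q.shape)  # torch.Size([2, 32, 768]); the last layer's queries are the output
```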

\ For a more comprehensive understanding, readers are encouraged to refer to [22].

\

:::info Authors:

(1) Wenliang Zhong, The University of Texas at Arlington ([email protected]);

(2) Wenyi Wu, Amazon ([email protected]);

(3) Qi Li, Amazon ([email protected]);

(4) Rob Barton, Amazon ([email protected]);

(5) Boxin Du, Amazon ([email protected]);

(6) Shioulin Sam, Amazon ([email protected]);

(7) Karim Bouyarmane, Amazon ([email protected]);

(8) Ismail Tutar, Amazon ([email protected]);

(9) Junzhou Huang, The University of Texas at Arlington ([email protected]).

:::


:::info This paper is available on arxiv under CC by 4.0 Deed (Attribution 4.0 International) license.

:::
