This paper introduces a flexible Transformer-based model for detecting anomalies in system logs. By embedding log templates with a pre-trained BERT model and incorporating positional and temporal encoding, it captures both semantic and sequential context within log sequences. The approach supports variable sequence lengths and configurable input features, enabling extensive experimentation across datasets. The model performs supervised binary classification to distinguish normal from anomalous patterns, using a [CLS]-like token for sequence-level representation. Overall, it pushes the boundaries of log-based anomaly detection by integrating modern NLP and deep learning techniques into system monitoring.

Transformer-Based Anomaly Detection Using Log Sequence Embeddings

Abstract

1 Introduction

2 Background and Related Work

2.1 Different Formulations of the Log-based Anomaly Detection Task

2.2 Supervised vs. Unsupervised

2.3 Information within Log Data

2.4 Fixed-Window Grouping

2.5 Related Work

3 A Configurable Transformer-based Anomaly Detection Approach

3.1 Problem Formulation

3.2 Log Parsing and Log Embedding

3.3 Positional & Temporal Encoding

3.4 Model Structure

3.5 Supervised Binary Classification

4 Experimental Setup

4.1 Datasets

4.2 Evaluation Metrics

4.3 Generating Log Sequences of Varying Lengths

4.4 Implementation Details and Experimental Environment

5 Experimental Results

5.1 RQ1: How does our proposed anomaly detection model perform compared to the baselines?

5.2 RQ2: How much does the sequential and temporal information within log sequences affect anomaly detection?

5.3 RQ3: How much do the different types of information individually contribute to anomaly detection?

6 Discussion

7 Threats to Validity

8 Conclusions and References


3 A Configurable Transformer-based Anomaly Detection Approach

In this study, we introduce a novel transformer-based method for anomaly detection that takes log sequences as input. The model employs a pre-trained BERT model to embed log templates, enabling the representation of semantic information within log messages. These embeddings, combined with positional or temporal encoding, are then fed into the transformer model, which uses the combined information to generate log sequence-level representations that drive the anomaly detection process. We design our model to be flexible: the input features are configurable, so we can conduct experiments with different feature combinations of the log data, and the model is designed and trained to handle input log sequences of varying lengths. In this section, we introduce our problem formulation and the detailed design of our method.

3.1 Problem Formulation

Following previous work [1], we formulate the task as binary classification: we train our proposed model to classify log sequences as anomalous or normal in a supervised manner. For the samples used in training and evaluating the model, we utilize a flexible grouping approach to generate log sequences of varying lengths. The details are introduced in Section 4.
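Stated compactly (the notation below is ours, added for illustration, and not taken verbatim from the paper):

```latex
% Supervised log anomaly detection as binary classification.
% Notation is ours for illustration; the sequence length n may vary.
Given a log sequence $S = (e_1, e_2, \dots, e_n)$ of parsed log events,
learn a classifier $f_\theta : S \mapsto \hat{y} \in \{0, 1\}$,
where $\hat{y} = 1$ marks an anomalous sequence, by minimizing the
binary cross-entropy between $f_\theta(S)$ and the ground-truth label $y$.
```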

3.2 Log Parsing and Log Embedding

In our work, we transform log events into numerical vectors by encoding log templates with a pre-trained language model. To obtain the log templates, we adopt the Drain parser [24], which is widely used and has good parsing performance on most of the public datasets [4]. We use a pre-trained Sentence-BERT model [25] (i.e., all-MiniLM-L6-v2 [26]) to embed the log templates generated by the log parsing process. The pre-trained model is trained with a contrastive learning objective and achieves state-of-the-art performance on various NLP tasks. We utilize this pre-trained model to create a representation that captures the semantic information of log messages and reflects the similarity between log templates for the downstream anomaly detection model. The output dimension of the model is 384.
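A minimal sketch of this embedding step is shown below, assuming the sentence-transformers library; the two Drain-style templates are hypothetical examples, not drawn from the paper's datasets.

```python
from sentence_transformers import SentenceTransformer

# Load the pre-trained all-MiniLM-L6-v2 model (384-dimensional output).
encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical log templates produced by the Drain parser ("<*>" marks
# the variable parts that Drain abstracts away).
templates = [
    "Receiving block <*> src: <*> dest: <*>",
    "PacketResponder <*> for block <*> terminating",
]

# Each template becomes a fixed-size semantic vector for the detector.
embeddings = encoder.encode(templates)
print(embeddings.shape)  # (2, 384)
```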

3.3 Positional & Temporal Encoding

The original transformer model [27] adopts a positional encoding to enable the model to make use of the order of the input sequence. As the model contains no recurrence and no convolution, it would be agnostic to the order of the log sequence without positional encoding. While some studies suggest that transformer models without explicit positional encoding remain competitive with standard models when dealing with sequential data [28, 29], it is important to note that any permutation of the input sequence produces the same internal state of the model. As sequential or temporal information may be an important indicator of anomalies within log sequences, previous transformer-based works utilize the standard positional encoding to inject the order of log events or templates in the sequence [11, 12, 21], aiming to detect anomalies associated with incorrect execution order. However, we noticed that in a commonly used replication of a transformer-based method [5], the positional encoding was, in fact, omitted. To the best of our knowledge, no existing work has encoded temporal information based on the timestamps of logs for anomaly detection. The effectiveness of utilizing sequential or temporal information in the anomaly detection task thus remains unclear.

In our proposed method, we incorporate sequential and temporal encoding into the transformer model and explore the importance of sequential and temporal information for anomaly detection. Specifically, our proposed method has different variants utilizing the following sequential or temporal encoding techniques. The encoding is added to the log representation, which serves as the input to the transformer structure.
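For reference, here is a minimal PyTorch sketch of the standard sinusoidal encoding; the helper name and shapes are our assumptions. It takes the positions as an explicit argument so the same function can later encode elapsed times, as RTEE does in Section 3.3.1.

```python
import torch

def sinusoidal_encoding(positions: torch.Tensor, d_model: int) -> torch.Tensor:
    """Sinusoidal encoding from Vaswani et al. [27]; `positions` may hold
    sequence indices (standard PE) or elapsed times (RTEE, Section 3.3.1)."""
    i = torch.arange(0, d_model, 2, dtype=torch.float32)  # even dimensions
    div = torch.pow(10000.0, i / d_model)                 # 10000^(2k/d_model)
    angles = positions.float().unsqueeze(1) / div         # (seq_len, d_model/2)
    enc = torch.zeros(positions.numel(), d_model)
    enc[:, 0::2] = torch.sin(angles)                      # even dims: sine
    enc[:, 1::2] = torch.cos(angles)                      # odd dims: cosine
    return enc

# Standard positional encoding for a 7-event sequence of 384-d embeddings.
pe = sinusoidal_encoding(torch.arange(7), d_model=384)
```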


3.3.1 Relative Time Elapse Encoding (RTEE)

We propose this temporal encoding method, RTEE, which simply substitutes the position index in positional encoding with the timing of each log event. We first calculate the elapsed time from the timestamps of log events in the log sequence. Instead of feeding the log event's sequence index into the sinusoidal and cosinusoidal equations as the position, we substitute the elapsed time relative to the first log event in the sequence. Table 1 shows an example of time intervals in a log sequence: the sequence contains 7 events with a time span of 7 seconds, and the elapsed time from the first event to each event in the sequence is used to calculate the temporal encoding of the corresponding event. Similar to positional encoding, the encoding is calculated with the above-mentioned Equation 1 and is not updated during the training process.
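Under the same assumptions, RTEE only changes what is passed as the position: the elapsed time of each event relative to the first. The timestamps below are illustrative stand-ins for the Table 1 example, reusing the `sinusoidal_encoding` helper sketched above.

```python
import torch  # reuses sinusoidal_encoding from the sketch above

# Hypothetical timestamps (seconds) for a 7-event, 7-second sequence.
timestamps = torch.tensor([0, 1, 2, 3, 4, 5, 7])
elapsed = timestamps - timestamps[0]               # relative to first event
rtee = sinusoidal_encoding(elapsed, d_model=384)   # fixed; never updated
assert rtee.shape == (7, 384)
```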


3.4 Model Structure

The transformer is a neural network architecture that relies on the self-attention mechanism to capture the relationships between input elements in a sequence. Transformer-based models and frameworks have been used for the anomaly detection task in many previous works [6, 11, 12, 21]. Inspired by these works, we use a transformer encoder-based model for anomaly detection. We design our approach to accept log sequences of varying lengths and generate sequence-level representations. To achieve this, we employ special tokens in the input log sequence, drawing inspiration from the design of the BERT model [31], so that the model can generate a sequence representation and identify the padded tokens and the end of the log sequence. In the input log sequence, we use the following tokens: [CLS] is placed at the start of each sequence to allow the model to generate aggregated information for the entire sequence, [EOS] is added at the end of the sequence to signify its completion, [MASK] is used to mark the masked tokens under the self-supervised training paradigm, and [PAD] is used for padded tokens. The embeddings for these special tokens are generated randomly based on the dimension of the log representation used. An example is shown in Figure 1, where the elapsed times for [CLS], [EOS], and [PAD] are set to -1. The log event-level representations and positional or temporal embeddings are summed to form the input features of the transformer structure.
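A minimal PyTorch sketch of such an encoder follows; the class name, layer count, and head count are our assumptions, while the 384-dimensional input matches the template embeddings described in Section 3.2.

```python
import torch
import torch.nn as nn

class LogTransformer(nn.Module):
    """Sketch of a transformer encoder over log sequences; hyperparameters
    are illustrative assumptions, not the paper's reported configuration."""

    def __init__(self, d_model: int = 384, n_heads: int = 8, n_layers: int = 2):
        super().__init__()
        # Randomly initialized embeddings for [CLS], [EOS], [MASK], [PAD].
        self.special_tokens = nn.Embedding(4, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.classifier = nn.Linear(d_model, 1)  # sequence-level anomaly logit

    def forward(self, x: torch.Tensor, pad_mask: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) = template embeddings + positional or
        # temporal encoding, with the special-token rows already inserted;
        # pad_mask: (batch, seq_len), True where the token is [PAD].
        h = self.encoder(x, src_key_padding_mask=pad_mask)
        return self.classifier(h[:, 0])  # output of the first ([CLS]) token
```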

3.5 Supervised Binary Classification

Under this training objective, we utilize the output of the first token of the transformer model while ignoring the outputs of the other tokens. The output of the first token is designed to aggregate the information of the whole input log sequence, similar to the [CLS] token of the BERT model, which provides an aggregated representation of the token sequence. Therefore, we consider the output of this token as a sequence-level representation. We train the model with a binary classification objective (i.e., binary cross-entropy loss) on this representation.
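A hedged training sketch is given below: `model` is the LogTransformer sketched above, and `loader` is a hypothetical DataLoader yielding (features, pad_mask, labels) batches; the optimizer and learning rate are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()                     # binary cross-entropy
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for features, pad_mask, labels in loader:              # hypothetical loader
    logits = model(features, pad_mask).squeeze(-1)     # first-token output
    loss = criterion(logits, labels.float())           # label 1 = anomalous
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```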


:::info Authors:

  1. Xingfang Wu
  2. Heng Li
  3. Foutse Khomh

:::

:::info This paper is available on arXiv under the CC BY 4.0 DEED (Attribution 4.0 International) license.

:::

