Internet-Draft          Network Anomaly Detection Framework          October 2024
Graf, et al.                 Expires 23 April 2025
This document describes the motivation and architecture of a Network Anomaly Detection Framework and its relationship to other documents describing network symptom semantics and the network incident lifecycle.¶
The described architecture for detecting IP network service interruptions is generic, broadly applicable, and extensible. Different applications are described and illustrated with open-source running code.¶
This note is to be removed before publishing as an RFC.¶
Discussion of this document takes place on the Operations and Management Area Working Group mailing list (nmop@ietf.org), which is archived at https://mailarchive.ietf.org/arch/browse/nmop/.¶
Source for this draft and an issue tracker can be found at https://github.com/ietf-wg-nmop/draft-ietf-nmop-network-anomaly-architecture/.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 23 April 2025.¶
Copyright (c) 2024 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
Today's highly virtualized, large-scale IP networks are a challenge for network operations to monitor due to their vast number of dependencies. Humans are no longer capable of manually verifying all dependencies end to end in an appropriate time.¶
IP networks are the backbone of today's society. We individually depend on networks to forward our IP packets from A to B at any time of the day in a timely fashion. A loss of connectivity, even for a short period of time, has manifold implications: being unable to browse the web, watch a soccer game, or access the company intranet, or, in life-threatening situations, being unable to reach emergency services. Increased packet forwarding delay due to congestion has, depending on the real-time character of the network application, anywhere from no impact to a severe impact on application performance.¶
Networks are, in general, deterministic; their usage, however, is only somewhat so. Humans, as a large group of people, are to some degree predictable: there are time-of-day patterns in terms of when we eat, sleep, work, or enjoy leisure time. These patterns potentially change depending on age, profession, and cultural background.¶
When operational or configuration changes to connectivity services occur, the objective is therefore for network operations to detect an interruption faster than the users of those connectivity services do.¶
In order to achieve this objective, automation in network monitoring is required, since the people operating the network are simply outnumbered by the people using connectivity services.¶
This automation needs to monitor network changes holistically, by observing all three network planes simultaneously for a given connectivity service, and to detect whether a change is service disruptive, i.e., whether received packets are no longer forwarded to the desired destination. Changes in the control and management planes indicate a network topology change, while changes in the forwarding plane describe how packets are being forwarded. In other words, control and management plane changes can be attributed to network topology state changes, whereas forwarding plane changes reflect the outcome of those network topology state changes.¶
Since changes in networks happen all the time due to the vast number of dependencies, a scoring system is needed to indicate whether a change is disruptive, how many transport sessions (flows) are affected, and whether such interruptions are usual or exceptional.¶
Such objectives can be achieved by applying checks on network-modelled time series data containing semantics that describe their dependencies across network planes. These checks can be based on domain knowledge, in essence how networks should work, or on outlier detection techniques that identify measurements deviating significantly from the norm shaped by human usage patterns.¶
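As a non-normative illustration, the following Python sketch shows how such a check could combine a domain-knowledge rule with a simple statistical outlier test on a time series of monitoring data; the metric names and the three-sigma threshold are assumptions made for the example only.¶

   # Hypothetical sketch: combine a domain-knowledge rule with a simple
   # statistical outlier check on a time series of monitoring data.
   from statistics import mean, stdev

   def is_outlier(history, current, sigma=3.0):
       # Flag the current value if it deviates more than `sigma`
       # standard deviations from the historical mean.
       if len(history) < 2:
           return False
       mu, sd = mean(history), stdev(history)
       return sd > 0 and abs(current - mu) > sigma * sd

   def check_interface(sample, history):
       # Domain-knowledge rule: an operationally "up" interface should
       # forward traffic; zero packets while up is suspicious.
       rule_violation = sample["oper_status"] == "up" and sample["packets"] == 0
       # Outlier rule: traffic deviates strongly from the recent norm.
       statistical_outlier = is_outlier(history, sample["packets"])
       return rule_violation or statistical_outlier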
The described scope does not take the connectivity service intent into account, nor does it verify whether the intent is being achieved at all times. Changes to the service intent that cause service disruptions are therefore treated as service disruptions, whereas a monitoring system taking the intent into account would consider them intended.¶
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.¶
This document defines the following terms:¶
Outlier Detection: A systematic approach to identifying rare data points that deviate significantly from the majority.¶
Service Disruption Detection (SDD): The process of detecting a service degradation by discovering anomalies in network monitoring data.¶
Service Disruption Detection System (SDDS): A system that performs SDD.¶
Additionally, this document makes use of the terms defined in [I-D.ietf-nmop-terminology] and [I-D.netana-nmop-network-anomaly-lifecycle].¶
The following terms are used as defined in [I-D.ietf-nmop-terminology]:¶
System¶
Resource¶
Characteristic¶
Change¶
Detect¶
Event¶
State¶
Relevance¶
Occurrence¶
Incident¶
Problem¶
Symptom¶
Cause¶
Consolidation¶
Alarm¶
Transient¶
Intermittent¶
Figure 2 in Section 3 of [I-D.ietf-nmop-terminology] shows characteristics of observed operational network telemetry metrics.¶
Figure 4 in Section 3 of [I-D.ietf-nmop-terminology] shows relationships between state, relevant state, problem, symptom, cause and alarm.¶
Figure 5 in Section 3 of [I-D.ietf-nmop-terminology] shows relationships between problem, symptom and cause.¶
The following terms are used as defined in [I-D.netana-nmop-network-anomaly-lifecycle]:¶
Outlier Detection, also known as anomaly detection, describes a systematic approach to identify rare data points deviating significantly from the majority. Outliers can manifest as a single data point or as a sequence of data points. There are multiple ways to classify anomalies in general, but in the context of this draft, the following three classes are taken into account:¶
For each outlier, a score between 0 and 1 is calculated. The higher the value, the higher the probability that the observed data point is an outlier. "Anomaly detection: A survey" [VAP09] gives additional details on anomaly detection and its types.¶
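As a minimal, non-normative sketch, a raw deviation measure could be mapped into such a [0, 1] score, for instance by squashing a z-score with a logistic function; the scaling constant below is an assumption made for illustration and not taken from the referenced documents.¶

   import math
   from statistics import mean, stdev

   def outlier_score(history, current, scale=1.0):
       # Map an absolute z-score into the [0, 1) range: values close to
       # the historical mean score near 0, strong deviations approach 1.
       if len(history) < 2:
           return 0.0
       mu, sd = mean(history), stdev(history)
       if sd == 0:
           return 0.0 if current == mu else 1.0
       z = abs(current - mu) / sd
       return 2.0 / (1.0 + math.exp(-scale * z)) - 1.0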
Knowledge-based anomaly detection, also known as rule-based anomaly detection, is a technique used to identify anomalies or outliers by comparing them against predefined rules or patterns. This approach relies on the use of domain-specific knowledge to set standards, thresholds, or rules for what is considered "normal" behavior. Traditionally, these rules are established manually by a knowledgeable network engineer. Looking forward, these rules can be expressed formally, in a human- and machine-readable way, in ontologies (Section 5.3 of [I-D.mackey-nmop-kg-for-netops]), based on network protocol behaviour defined in RFC-like documents and network symptoms derived from defined patterns.¶
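The following Python sketch illustrates, under assumed rule names, metric keys, and thresholds, how such manually established rules could be encoded and evaluated against collected measurements.¶

   # Hypothetical rule set encoding a network engineer's domain knowledge.
   RULES = [
       # (symptom name, predicate over one observation)
       ("interface-down",   lambda m: m["oper_status"] != "up"),
       ("bgp-session-flap", lambda m: m["bgp_transitions"] > 2),
       ("traffic-loss",     lambda m: m["out_packets"] == 0 and m["in_packets"] > 0),
   ]

   def evaluate_rules(measurement):
       # Return the list of symptoms whose rule matches the measurement.
       return [name for name, rule in RULES if rule(measurement)]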
Additionally, in the context of network anomaly detection, the knowledge-based approach works hand in hand with the deterministic understanding of the network, which is reflected in network modeling. Components are organized into three network planes: the Management Plane, the Control Plane, and the Forwarding Plane. A component can relate to a physical, virtual, or configurational entity, or to a sum of packets belonging to a flow being forwarded in a network.¶
Such relationships can be modelled in a Digital Map to automate that process. [I-D.havel-nmop-digital-map-concept] describes a concept and [I-D.havel-nmop-digital-map] an implementation of such network-modelled relationships.¶
They can also be modelled in a Knowledge Graph (Section 5 of [I-D.mackey-nmop-kg-for-netops]), where ontologies can be used to augment the relationships among different network elements in the network model.¶
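As a simplified, non-normative illustration, such cross-plane relationships could be captured in a small in-memory graph; the component names and plane labels below are assumptions made for the example only.¶

   # Hypothetical network model relating components across the three planes.
   from collections import defaultdict

   graph = defaultdict(set)

   def relate(a, b):
       # Record an undirected dependency between two components.
       graph[a].add(b)
       graph[b].add(a)

   # Management plane -> Control plane -> Forwarding plane dependencies.
   relate(("management", "interface eth0"), ("control", "isis adjacency pe1-p1"))
   relate(("control", "isis adjacency pe1-p1"), ("forwarding", "flows via pe1"))

   def impacted(component):
       # Components directly depending on the given component.
       return graph[component]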
The Data Mesh [Deh22] Architecture distinguishes between operational and analytical data. Operational data refers to data collected from operational systems, while analytical data refers to insights gained from operational data.¶
In terms of network observability, the semantics of operational network metrics are defined by the IETF and are categorized, as described in the Network Telemetry Framework [RFC9232], into the following three network planes:¶
The Service Disruption Detection process generates analytical metrics describing symptoms and outlier patterns of the connectivity service disruption.¶
The observed symptoms are categorized into a semantic triple as described in [W3C-RDF-concept-triples]: action, reason, cause. The action is the object, describing the change in the network; the reason is the predicate, explaining why this change occurred; and the cause is the subject, which triggered that change.¶
Symptom definitions are described in Section 3 of [I-D.netana-nmop-network-anomaly-semantics] and outlier pattern semantics in Section 4 of [I-D.netana-nmop-network-anomaly-lifecycle], and both are expressed in YANG information models.¶
However, the semantic triples could also be expressed with the Semantic Web Technology Stack in RDF, RDFS, and OWL definitions as described in Section 6 of [I-D.mackey-nmop-kg-for-netops]. Together with the ontology definitions described in Section 2.3, a Knowledge Graph can be created describing the relationship between the network state and the observed symptom.¶
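Independent of whether YANG or RDF is used to express it, a minimal, non-normative sketch of such a symptom triple as a plain data structure is shown below; the concrete cause, reason, and action values are illustrative assumptions, not values defined in the referenced documents.¶

   from dataclasses import dataclass

   @dataclass
   class SymptomTriple:
       # Subject: the cause that triggered the change.
       cause: str
       # Predicate: the reason explaining why the change occurred.
       reason: str
       # Object: the action describing the change in the network.
       action: str

   # Illustrative example only.
   symptom = SymptomTriple(
       cause="interface",
       reason="link-failure",
       action="session-teardown",
   )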
A system architecture aimed at detecting service disruptions is typically built upon multiple components, for which design choices need to be made. In this section, we describe the main components of the architecture and delve into considerations to be made when designing such components in an implementation.¶
The system architecture is illustrated in Figure 1 and its main components are described in the following subsections.¶
A service inventory is used to obtain a list of the connectivity services for which Anomaly Detection is to be performed. A service profiling process may be executed on the service in order to define a configuration of the service disruption detection approach and parameters to be used.¶
Based on this service list and potential preliminary service profiling, a configuration of the Service Disruption Detection is produced. It defines the set of approaches that need to be applied to perform SDD, as well as parameters that are to be set when executing the algorithms performing SDD per se.¶
As the service lives on, the configuration may be adapted as a result of an evolution of the profiling, of a postmortem analysis produced after an event impacting the service, or of false positives raised by the alarm system.¶
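The shape of such a configuration is implementation specific; the following Python dictionary is a hypothetical sketch of what a per-profile SDD configuration could look like, with field names and values chosen purely for illustration.¶

   # Hypothetical per-profile Service Disruption Detection configuration.
   sdd_config = {
       "profile": "l3vpn-business",
       "approaches": ["knowledge-based", "outlier-detection"],
       "parameters": {
           "evaluation_interval_s": 60,   # how often SDD is executed
           "deviation_sigma": 3.0,        # outlier sensitivity
           "min_affected_flows": 10,      # ignore very small impacts
       },
   }

   # The configuration may be revisited after profiling updates,
   # postmortem analysis, or observed false positives.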
Collection of network monitoring data involves the management of the subscriptions to network telemetry on nodes of the network, and the configuration of the collection infrastructure to receive the monitoring data produced by the network.¶
The monitoring data produced by the collection infrastructure is then streamed through a message broker system for further processing.¶
Networks tend to produce extremely large amounts of monitoring data. To preserve scaling and reduce costs, decisions need to be made on how long such data is retained in storage and at which level of storage it needs to be kept. A retention time needs to be set on the raw data produced by the collection system, in accordance with its utility for further use. This aspect is elaborated in further sections.¶
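As a non-normative sketch, a consumer of the message broker could look as follows; it assumes a Kafka-style broker reachable through the kafka-python client library, and the topic name, broker address, and downstream handler are hypothetical.¶

   # Hypothetical consumer of network telemetry from a message broker.
   import json
   from kafka import KafkaConsumer

   def process_raw_record(record):
       # Placeholder: hand the raw monitoring data to the aggregation stage.
       pass

   consumer = KafkaConsumer(
       "network-telemetry",                       # assumed topic name
       bootstrap_servers="broker.example.net:9092",
       value_deserializer=lambda v: json.loads(v.decode("utf-8")),
   )

   for message in consumer:
       process_raw_record(message.value)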
Aggregation is the process of producing data upon which detection of a service disruption can be performed, based on collected network monitoring data.¶
Pre-processing of collected network monitoring data is usually performed so as to produce input for the Service Disruption Detection component. This can be achieved in multiple ways, depending on the architecture of the SDD component. As an example, the granularity at which forwarding data is produced by the network may be too fine for the SDD algorithms, and the data may instead be aggregated into a coarser dimension for SDD execution.¶
A retention time also needs to be decided upon for Aggregated data. Note that the retention time must be set carefully, in accordance with the replay ability requirement discussed in Section 3.8.¶
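A minimal sketch of such an aggregation step, assuming the pandas library and a flat record layout with hypothetical field names, could look as follows.¶

   # Hypothetical aggregation of fine-grained flow records into
   # per-service 5-minute traffic volumes, using pandas.
   import pandas as pd

   def aggregate(flow_records):
       # flow_records: iterable of dicts with "timestamp", "service", "bytes".
       df = pd.DataFrame(flow_records)
       df["timestamp"] = pd.to_datetime(df["timestamp"])
       return (
           df.groupby(["service", pd.Grouper(key="timestamp", freq="5min")])["bytes"]
             .sum()
             .reset_index()
       )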
Service Disruption Detection processes the aggregated network data in order to decide whether a service is degraded to the point where network operations need to be alerted to an ongoing problem within the network.¶
Two key aspects need to be considered when designing the SDD component. First, the way the data is being processed needs to be carefully designed, as networks typically produce extremely large amounts of data which may hinder the scalability of the architecture. Second, the algorithms used to make a decision to alert the operator need to be designed in such a way that the operator can trust that a targeted Service Disruption will be detected (no false negatives), while not spamming the operator with alarms that do not reflect an actual issue within the network (false positives) leading to alarm fatigue.¶
Two approaches are typically followed to present the data to the SDD system. Classically, the aggregated data can be stored in a database that is polled at regular intervals by the SDD component for decision making. Alternatively, a streaming approach can be followed so as to process the data while they are being consumed from the collection component.¶
For SDD per se, two families of algorithms can be decided upon. First, knowledge-based detection approaches can be used, mimicking the process that human operators follow when looking at the data. Second, Machine Learning based outlier detection approaches can be used to detect deviations from the norm.¶
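The sketch below illustrates a simple polling loop that applies both families of algorithms to the aggregated data at a fixed interval; it reuses the evaluate_rules() and outlier_score() helpers sketched earlier and assumes a store.latest_windows() accessor, a raise_alarm() hook, and an alerting threshold that are all hypothetical.¶

   import time

   def sdd_polling_loop(store, interval_s=60):
       # Periodically poll the aggregated data store and run both
       # detection families on the latest measurement per service.
       while True:
           for service, history, current in store.latest_windows():   # assumed API
               symptoms = evaluate_rules(current)                      # knowledge-based
               score = outlier_score(history, current["out_packets"])  # statistical
               if symptoms or score > 0.9:                             # assumed threshold
                   raise_alarm(service, symptoms, score)               # assumed hook
           time.sleep(interval_s)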
Some input to SDD is made of established knowledge of the network that is unrelated to the dimensions according to which outlier detection is performed. For example, the knowledge of the network infrastructure may be required to perform some service disruption detection. Such data need to be rendered accessible and updatable for use by SDD. They may come from inventories, or automated gathering of data from the network itself.¶
As rules cannot be crafted specifically for each customer, they need to be defined according to pre-established service profiles. Processing of monitoring data can be performed in order to associate each service with a profile. External knowledge on the customer can also help in associating a service with a profile.¶
For a profile, a set of strategies is defined. Each strategy captures one approach to look at the data (as a human operator does) to observe whether an abnormal situation is arising. Strategies are defined as a function of observed outliers as defined in Section 2.2.¶
When one of the strategies applied for a profile detects a concerning outlier or combined outlier, an alarm needs to be raised.¶
Depending on the implementation of the architecture, a scheduler may be needed in order to orchestrate the evaluation of the alarm levels for each strategy applied for a profile, for all service instances associated with such profile.¶
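A hypothetical sketch of how strategies could be attached to profiles and evaluated per service instance follows; the profile names, strategy functions, and thresholds are illustrative assumptions only.¶

   def check_traffic_drop(window):
       # Strategy: traffic dropped to zero while the service was active before.
       return window["current_bytes"] == 0 and window["previous_bytes"] > 0

   def check_topology_change(window):
       # Strategy: an unusual number of control plane events in the window.
       return window["control_plane_events"] > 5      # assumed threshold

   PROFILE_STRATEGIES = {
       "l3vpn-business":    [check_traffic_drop, check_topology_change],
       "l3vpn-best-effort": [check_traffic_drop],
   }

   def evaluate_profile(profile, services, fetch_window, raise_alarm):
       # Run every strategy of the profile for each service instance.
       for service in services:
           window = fetch_window(service)
           for strategy in PROFILE_STRATEGIES[profile]:
               if strategy(window):
                   raise_alarm(service, strategy.__name__)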
Machine learning-based anomaly detection can also be seamlessly integrated into such an SDDS. Machine learning is commonly used for detecting outliers or anomalies. Typically, unsupervised learning is widely recognized for its applicability, given the inherent characteristics of network data. Although machine learning requires a sizeable amount of high-quality data and considerable training effort, the advantages it offers make these requirements worthwhile. The power of this approach lies in its generalizability, robustness, ability to simplify the fine-tuning process, and most importantly, its capability to identify anomaly patterns that might go unnoticed by the human observer.¶
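As one commonly used unsupervised technique, not prescribed by this document, an Isolation Forest from scikit-learn could be applied to aggregated per-window features as sketched below; the feature layout, the synthetic training data, and the contamination parameter are assumptions made for illustration.¶

   # Hypothetical unsupervised outlier detection on aggregated features,
   # using scikit-learn's Isolation Forest.
   import numpy as np
   from sklearn.ensemble import IsolationForest

   # Each row represents one time window of aggregated service metrics,
   # e.g. [bytes, packets, flows, control plane events].
   rng = np.random.default_rng(0)
   history = rng.normal(loc=[1e6, 1e3, 50, 1], scale=[1e5, 1e2, 5, 1], size=(1000, 4))

   # Train on historical windows and score the most recent one.
   model = IsolationForest(contamination=0.01, random_state=0).fit(history)
   current = np.array([[2e4, 10, 2, 8]])         # hypothetical degraded window
   is_outlier = model.predict(current)[0] == -1  # -1 marks a detected outlier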
Storage may be required to execute SDD, as some algorithms may be relying on historical (aggregated) monitoring data in order to detect anomalies. Careful considerations need to be made on the level at which such data is stored, as slow access to such data may be detrimental to the reactivity of the system.¶
When the SDD component decides that a service is undergoing a disruption, a relevant-state change notification needs to be sent to the alarm and problem management system as shown in Figure 4 in Section 3 of [I-D.ietf-nmop-terminology]. Multiple practical aspects need to be taken into account in this component.¶
When the issue lasts longer than the interval at which the SDD component runs, the relevant-state change mechanism should not create multiple notifications to the operator, so as to not overwhelm the management of the issue. However, the information provided along with the alarm should be kept up to date during the full duration of the issue.¶
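A minimal sketch of such de-duplication logic, assuming an in-memory record of active alarms keyed by service and hypothetical notification callbacks, is shown below.¶

   # Hypothetical de-duplication of relevant-state change notifications:
   # raise one alarm per ongoing issue and keep its details up to date.
   active_alarms = {}

   def notify(service, symptoms, score, send_notification, update_alarm):
       if service not in active_alarms:
           # First detection of the issue: notify the operator once.
           active_alarms[service] = {"symptoms": symptoms, "score": score}
           send_notification(service, active_alarms[service])
       else:
           # Issue persists across SDD runs: refresh the alarm information
           # instead of emitting a new notification.
           active_alarms[service].update({"symptoms": symptoms, "score": score})
           update_alarm(service, active_alarms[service])

   def clear(service, close_alarm):
       # When the disruption is over, close the alarm and forget the state.
       if service in active_alarms:
           close_alarm(service, active_alarms.pop(service))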
Validation and refinement are performed during Postmortem.¶
From an Anomaly Detection Lifecycle point of view as described in [I-D.netana-nmop-network-anomaly-lifecycle], the Service Disruption Detection Configuration evolves over time, iteratively, looping over three main phases: detection, validation and refinement.¶
The Detection phase produces the alarms that are sent to the Alarm and Problem Management System and at the same time it stores the network anomaly and symptom labels into the Label Store. This enables network engineers to review the labels to validate and edit them as needed.¶
The Validation stage is typically performed by network engineers reviewing the results of the detection and indicating which symptoms and network anomalies have been useful for the identification of problems in the network. The original labels from the Service Disruption Detection are analyzed and an updated set of more accurate labels is provided back to the label store.¶
The resulting labels are then provided back to the Network Anomaly Detection via its refinement capabilities: the refinement updates the Service Disruption Detection configuration in order to improve the results of the detection (e.g., false positives, false negatives, accuracy of the boundaries, etc.).¶
When a service disruption has been detected, it is essential for the human operator to be able to analyze the data which led to the raising of an alarm. It is thus important that an SDDS preserves both the data which led to the creation of the alarm and human-understandable information on why that data led to the raising of the alarm.¶
In early stages of operations, or when experimenting with an SDDS, it is common that the parameters used for SDD need to be fine-tuned. This process is facilitated by designing the SDDS architecture in a way that allows rerunning the SDD algorithms on the same input.¶
Data retention, as well as its level, needs to be defined so as not to sacrifice the ability to replay SDD execution for the sake of improving its accuracy.¶
Note to the RFC-Editor: Please remove this section before publishing.¶
This section records the status of known implementations.¶
This architecture has been developed as part of a proof of concept started in September 2022, first in a dedicated network lab environment and, later in December 2022, in Swisscom production to monitor a limited set of 16 L3 VPN connectivity services.¶
At the Applied Networking Research Workshop at IRTF 117, the architecture was published for the first time in the following academic paper: [Ahf23].¶
Since December 2022, 20 connectivity service disruptions have been monitored, and 52 false positives occurred, caused by the time series database temporarily not being real-time and by missing traffic profiling (so that comparison to the previous week was not applicable). For the 20 connectivity service disruptions, 6 parameters were monitored; the service disruption was recognized by 1 parameter in 3 cases, by 2 parameters in 8 cases, by 3 parameters in 6 cases, and by 4 parameters in 2 cases.¶
A real-time streaming based version has been deployed in Swisscom production as a proof of concept in June 2024, monitoring more than 12'000 L3 VPNs concurrently. Improved profiling capabilities are currently under development.¶
The authors would like to thank Alex Huang Feng, Ahmed Elhassany and Vincenzo Riccobene for their valuable contribution.¶
The authors would like to thank Qin Wu, Ignacio Dominguez Martinez-Casanueva and Adrian Farrel for their review and valuable comments.¶