IDPS technologies use many methodologies to detect incidents. Most IDPS technologies use multiple detection methodologies, either separately or integrated, to provide broader and more accurate detection. The primary classes are:
- Signature-based detection
- Anomaly-based detection
- Stateful protocol analysis
A signature is a pattern that corresponds to a known threat. Signature-based detection is the process of comparing signatures against observed events to identify possible incidents. Examples of signatures include:
- A telnet attempt with a username of “root”, which is a violation of an organization’s security policy
- An e-mail with a subject of “Free pictures!” and an attachment filename of “freepics.exe”, which are characteristics of a known form of malware
- An operating system log entry with a status code value of 645, which indicates that the host’s auditing has been disabled.
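The comparison described above can be sketched as a simple lookup against a list of known patterns. This is a minimal illustration, not a real IDPS rule format; the event field names and signature entries below are hypothetical, drawn only from the three examples in the text.

```python
# Minimal sketch of signature-based detection: compare observed events
# against a list of known-threat signatures. Field names and patterns
# are illustrative, modeled on the three examples above.
SIGNATURES = [
    {"name": "telnet root login", "field": "telnet_user", "pattern": "root"},
    {"name": "freepics malware", "field": "attachment", "pattern": "freepics.exe"},
    {"name": "auditing disabled", "field": "status_code", "pattern": "645"},
]

def match_signatures(event):
    """Return the names of all signatures the observed event matches."""
    return [s["name"] for s in SIGNATURES
            if event.get(s["field"]) == s["pattern"]]

print(match_signatures({"telnet_user": "root"}))       # ['telnet root login']
print(match_signatures({"attachment": "cat.png"}))     # []
```

Note the core limitation this makes visible: only exact, previously cataloged patterns are detected; anything not in `SIGNATURES` passes silently.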
Anomaly-based detection is the process of comparing definitions of what activity is considered normal against observed events to identify significant deviations.
An IDPS using anomaly-based detection has profiles that represent the normal behavior of such things as users, hosts, network connections, or applications. The profiles are developed by monitoring the characteristics of typical activity over a period of time (typically days, sometimes weeks) known as a training period.
Profiles for anomaly-based detection can either be static or dynamic.
- A static profile will eventually become inaccurate, so it needs to be regenerated periodically.
- A dynamic profile is adjusted constantly as additional events are observed.
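A dynamic profile can be sketched with an exponentially weighted running mean and variance of a single metric (e.g., e-mails sent per hour): each observation is checked against the current profile and then folded into it. This is a minimal sketch under assumptions of my own; the `alpha`, `threshold`, and `training` values are illustrative, not from the source.

```python
# Hedged sketch of anomaly-based detection with a dynamic profile.
# The profile (running mean/variance) adjusts constantly as events are
# observed; flagging is suppressed during an initial training period.
class DynamicProfile:
    def __init__(self, alpha=0.1, threshold=3.0, training=5):
        self.alpha = alpha          # how quickly the profile adapts
        self.threshold = threshold  # standard deviations considered anomalous
        self.training = training    # observations before flagging begins
        self.count = 0
        self.mean = None
        self.var = 0.0

    def observe(self, value):
        """Return True if value deviates significantly, then update the profile."""
        self.count += 1
        if self.mean is None:       # first observation seeds the profile
            self.mean = value
            return False
        std = self.var ** 0.5
        anomalous = (self.count > self.training and std > 0
                     and abs(value - self.mean) > self.threshold * std)
        # exponentially weighted update: the dynamic profile adjusts constantly
        diff = value - self.mean
        self.mean += self.alpha * diff
        self.var = (1 - self.alpha) * (self.var + self.alpha * diff * diff)
        return anomalous

p = DynamicProfile()
for v in [10, 11, 9, 10, 12, 10, 11]:   # typical activity builds the profile
    p.observe(v)
print(p.observe(1000))                   # True: a significant deviation
```

The static-profile drawback in the first bullet corresponds to freezing `mean` and `var` after training: as legitimate behavior drifts, a frozen profile accumulates false positives until it is regenerated.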
Stateful protocol analysis
Some vendors use the term “deep packet inspection” to refer to performing some type of stateful protocol analysis, often combined with a firewall capability that can block communications determined to be malicious.
- This publication uses the term “stateful protocol analysis” because it is appropriate for analyzing both network-based and host-based activity, whereas “deep packet inspection” is an appropriate term for network-based activity only.
- Also, historically there has not been consensus in the security community as to the meaning of “deep packet inspection”.
Stateful protocol analysis is the process of comparing predetermined profiles of generally accepted definitions of benign protocol activity for each protocol state against observed events to identify deviations.
Unlike anomaly-based detection, which uses host or network-specific profiles, stateful protocol analysis relies on vendor-developed universal profiles that specify how particular protocols should and should not be used.
The “stateful” in stateful protocol analysis means that the IDPS is capable of understanding and tracking the state of network, transport, and application protocols that have a notion of state.
- For example, when a user starts a File Transfer Protocol (FTP) session, the session is initially in the unauthenticated state. Unauthenticated users should only perform a few commands in this state, such as viewing help information or providing usernames and passwords.
- An important part of understanding state is pairing requests with responses, so when an FTP authentication attempt occurs, the IDPS can determine if it was successful by finding the status code in the corresponding response. Once the user has authenticated successfully, the session is in the authenticated state, and users are expected to perform any of several dozen commands. Performing most of these commands while in the unauthenticated state would be considered suspicious, but in the authenticated state performing most of them is considered benign.
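The FTP example above can be sketched as a tiny state machine: the tracker pairs requests with responses and changes state when the server confirms a successful login. This is a deliberately simplified illustration; the command sets are a small subset of RFC 959, and the class design is my own, not a real IDPS component. Status code 230 is FTP's "user logged in" response.

```python
# Hedged sketch of stateful protocol analysis for an FTP session:
# track the session state and flag commands unexpected in that state.
UNAUTH_OK = {"USER", "PASS", "HELP", "QUIT"}                    # pre-login commands
AUTH_OK = UNAUTH_OK | {"CWD", "LIST", "RETR", "STOR", "PWD"}    # post-login commands

class FtpSessionTracker:
    def __init__(self):
        self.state = "unauthenticated"

    def command(self, cmd):
        """Return None if the command is expected in the current state, else an alert."""
        cmd = cmd.upper()
        allowed = AUTH_OK if self.state == "authenticated" else UNAUTH_OK
        if cmd not in allowed:
            return f"unexpected command {cmd} in {self.state} state"
        return None

    def response(self, code):
        """Pair a server response with the session; 230 means login succeeded."""
        if code == 230:
            self.state = "authenticated"

s = FtpSessionTracker()
print(s.command("RETR"))   # alert: file retrieval before authentication
s.command("USER"); s.command("PASS")
s.response(230)            # server confirms successful authentication
print(s.command("RETR"))   # None: expected in the authenticated state
```

The same `RETR` command is suspicious in one state and benign in the other, which is exactly the distinction signature-based detection cannot make.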
Stateful protocol analysis can identify unexpected sequences of commands, such as issuing the same command repeatedly or issuing a command without first issuing a command upon which it is dependent.
Another state tracking feature of stateful protocol analysis is that for protocols that perform authentication, the IDPS can keep track of the authenticator used for each session, and record the authenticator used for suspicious activity. This is helpful when investigating an incident. Some IDPSs can also use the authenticator information to define acceptable activity differently for multiple classes of users or specific users.
The “protocol analysis” performed by stateful protocol analysis methods usually includes reasonableness checks for individual commands, such as minimum and maximum lengths for arguments. If a command typically has a username argument, and usernames have a maximum length of 20 characters, then an argument with a length of 1000 characters is suspicious. If the large argument contains binary data, then it is even more suspicious.
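The reasonableness check described above can be sketched directly from the text's numbers: a 20-character maximum for usernames, with binary content in an oversized argument treated as more suspicious. The function name and the 0/1/2 suspicion scale are my own illustration.

```python
# Sketch of a per-command reasonableness check: flag arguments exceeding
# the expected maximum length, and treat non-printable (binary) content
# in an oversized argument as even more suspicious.
import string

def check_username_arg(arg, max_len=20):
    """Return a suspicion level: 0 = ok, 1 = too long, 2 = too long and binary."""
    if len(arg) <= max_len:
        return 0
    binary = any(ch not in string.printable for ch in arg)
    return 2 if binary else 1

print(check_username_arg("alice"))                    # 0
print(check_username_arg("A" * 1000))                 # 1: suspicious length
print(check_username_arg("A" * 30 + "\x00\x90"))      # 2: length plus binary data
```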
Stateful protocol analysis methods use protocol models, which are typically based primarily on protocol standards from software vendors and standards bodies (e.g., Internet Engineering Task Force [IETF] Request for Comments [RFC]). The protocol models also typically take into account variances in each protocol’s implementation.
- Many standards are not exhaustively complete in explaining the details of the protocol, which causes variations among implementations.
- Also, many vendors either violate standards or add proprietary features, some of which may replace features from the standards.
- For proprietary protocols, complete details about the protocols are often not available, making it difficult for IDPS technologies to perform comprehensive, accurate analysis.
- As protocols are revised and vendors alter their protocol implementations, IDPS protocol models need to be updated to reflect those changes.
- The primary drawback to stateful protocol analysis methods is that they are very resource-intensive because of the complexity of the analysis and the overhead involved in performing state tracking for many simultaneous sessions.
Another serious problem is that stateful protocol analysis methods cannot detect attacks that do not violate the characteristics of generally acceptable protocol behavior, such as performing many benign actions in a short period of time to cause a denial of service.
Yet another problem is that the protocol model used by an IDPS might conflict with the way the protocol is implemented in particular versions of specific applications and operating systems, or how different client and server implementations of the protocol interact.
Source: NIST SP 800-94
- A process refers to the code and data in memory segments into which the operating system loads a program. Simply put, a process is a program loaded into memory to be executed by the processor.
- A thread is the minimum unit of execution managed by the operating system. In a modern operating system, a process can be divided into threads. Even on a single-processor system, threads can be switched between time slices to support preemptive multitasking.
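The key consequence of the process/thread distinction is that threads share their parent process's memory. A minimal sketch in Python: several threads update one in-process variable, synchronized with a lock because shared memory is exactly what makes unsynchronized access unsafe.

```python
# Illustrative sketch: threads are execution units inside one process,
# so they all see the same `counter` variable in the process's memory.
import threading

counter = 0
lock = threading.Lock()

def work():
    global counter
    for _ in range(1000):
        with lock:          # shared memory requires synchronization
            counter += 1

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000: every thread updated the same in-process variable
```

Separate processes, by contrast, would each get their own copy of `counter`, and the parent's value would remain 0 unless the processes explicitly communicated.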
Multi-programming (one by one)
Multi-programming, typically on a uni-processor system, is the capability of the operating system to load multiple programs into main memory for execution. As this is a legacy feature, multiple programs can be loaded into memory, but the processor can execute only one process at a time. Imagine the early days, when computer operators fed programs on punched cards into the main memory of a computer system; multi-programming was an achievement.
The definition of a task or job varies. Some call a process a task or a job. In Windows, a job is a group of processes, while a task is a thread-based construct that supports both asynchronous (futures or callbacks) and parallel processing.
Multi-tasking is the capacity for concurrency, that is, doing more than one thing at a time. Concurrency can be a perceived experience (preemptive or cooperative multi-tasking) or parallel real-time execution (multi-processing).
Multi-threading is a multi-tasking implementation that supports fast switching between threads so that users can perceive multiple tasks or applications performed “at the same time” on a uni-processor system. It’s even more powerful on a multi-processor system.
Multi-processing is the multi-tasking capability fulfilled through a multi-processor system.
- Symmetric multiprocessing (SMP): multiple processors controlled by a single operating system.
- Massively parallel processing (MPP): a collection of multi-processor systems, each with its own operating system, working as a unit.
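Multi-processing in the SMP sense can be sketched with Python's `multiprocessing` module: one operating system schedules several worker processes, each with its own memory, across the available processors. The pool size and workload here are arbitrary illustrations.

```python
# Hedged sketch of multi-processing: a pool of worker processes, each in
# its own address space, scheduled by a single OS across the processors.
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == "__main__":
    with Pool(processes=4) as pool:          # 4 workers under one OS (SMP-style)
        results = pool.map(square, range(8)) # work distributed across processes
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

MPP differs in that each node runs its own operating system, so coordination happens by message passing between systems rather than by one scheduler sharing memory.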
You are reviewing logs on a web server and find the following entry:
[24/Feb/2020:00:05:36 +0800] “GET /load?image=../../../etc/shadow%00 HTTP/1.0” 200
Which of the following is the most likely vulnerability on the web server?
A. The diagonal of the attack surface higher than risk appetite
B. Misconfiguration without due care
C. Continuous monitoring not automated by the “crond” daemon
D. Path traversal by adversaries
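To see why the log entry points at path traversal: after URL decoding, the `image` parameter contains `../` sequences that climb out of the web root and a `%00` null byte (a truncation trick against some legacy path handling), targeting `/etc/shadow`. A hedged sketch of such a check, assuming a hypothetical web root of `/var/www/images`; this is an illustration, not a complete defense.

```python
# Sketch: detect a path-traversal attempt in a request parameter by
# URL-decoding it and resolving it against an assumed web root.
from urllib.parse import unquote
import posixpath

def is_traversal(param, web_root="/var/www/images"):   # web_root is hypothetical
    decoded = unquote(param)                 # "%00" decodes to a null byte
    if "\x00" in decoded:                    # null-byte truncation attempt
        return True
    resolved = posixpath.normpath(posixpath.join(web_root, decoded))
    return not resolved.startswith(web_root) # did the path escape the web root?

print(is_traversal("../../../etc/shadow%00"))  # True
print(is_traversal("cat.png"))                 # False
```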
Your company sells toys online and ships globally. The online shopping website would send the original password back to the cell phone if the customer forgot the password. Which of the following is the best cryptographic algorithm used to protect the password at rest?
A. 3DES (Triple Data Encryption Algorithm)
B. Salted SHA (Secure Hash Algorithms)
C. HMAC (Hashed Message Authentication Code)
D. Hardware token
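As background for option B, a salted hash stores a random salt alongside the digest so that identical passwords produce different hashes. Note the property that matters for this question: hashing is one-way, so a salted hash can verify a password but can never return the original password to the customer. A minimal sketch using Python's standard library (a production system would prefer a slow, purpose-built scheme such as PBKDF2 rather than a single SHA-256 pass):

```python
# Sketch of salted hashing: store (salt, digest); verification recomputes
# the digest from the stored salt and the candidate password.
import hashlib
import os

def hash_password(password, salt=None):
    salt = salt if salt is not None else os.urandom(16)  # random per-password salt
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

def verify_password(password, salt, digest):
    return hashlib.sha256(salt + password.encode()).hexdigest() == digest

salt, digest = hash_password("s3cret")
print(verify_password("s3cret", salt, digest))  # True
print(verify_password("wrong", salt, digest))   # False
```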
Your company is a direct bank that relies entirely on internet banking; its shares are publicly traded. You are exercising due diligence by surveying laws and regulations applicable to your company. Which of the following has a profound effect on corporate governance and holds directors and officers personally liable for the accuracy of financial statements?
Information systems are either made in-house or bought from external entities. To build or purchase a secure information system or any of its components falls in the discipline of Systems Security Engineering, or Security Engineering for short.
- Systems Engineering is a discipline of applying knowledge to create or acquire a system that is composed of interrelated elements collaborating for a common purpose throughout the system development life cycle (SDLC), or system life cycle (SLC). A life cycle is a collection of predefined stages and processes. “Development” here implies construction or procurement.
- Security Engineering is a specialty discipline of systems engineering. It addresses the protection needs or security requirements throughout the system life cycle.
Building a system component involves a development life cycle (e.g., a software development life cycle) as a portion of the system development life cycle, while acquiring one also involves a procurement life cycle.
Your organization decides to implement an on-premise CRM system that will be supported and maintained by a service provider under a two-year fixed-price service contract. To cope with the business dynamics and stay flexible, which of the following is the best contract arrangement to engage with the service provider?
A. Specify service level requirements (SLR) in the service contract
B. Separate the service level agreement (SLA) from the service contract
C. Preserve the right to audit to enforce supply chain security
D. Require only competent and certified engineers to fulfill this contract
- Compliance is the fulfillment of specified requirements. (ISO 2394)
- Conformance is the fulfillment of specified requirements. (ISO 19105)
- Conformity is the fulfillment of a requirement. (ISO 22301)
Taken literally, the definitions of compliance and conformance are identical. However, some consider compliance to be forced adherence to specified requirements and conformance to be voluntary adherence. Conformity is a related concept that emphasizes the fulfillment of a "single" requirement.
Your organization shall preserve accounting transactions for at least ten years per regulatory requirements. After conducting data analytics, you discover that transactions stored in the database for more than one year that might still be reused or queried account for only 5%. Which of the following is the least concern in terms of enforcing the regulatory mandate?
A. Hierarchical storage management (HSM)
B. Data retention policy
C. Backup validation
D. Offsite tape vaulting
Your organization conducts full backup on Sundays and incremental backup on weekdays and Saturdays, all at midnight and supported by a highly automated tape library and offsite tape vaulting. An internal auditor asked the backup operator to restore tapes to a spare server to verify the effectiveness of the backup. Which of the following assessment methods does the internal auditor employ?