Monthly Archives: March 2015

Charles Leaver – The Reasons Why Narrow Indicators Of Compromise Are Not Sufficient For Total Endpoint Monitoring

Presented By Charles Leaver And Written By Dr Al Hartmann Of Ziften Inc.

 

The Breadth Of The Indicator – Broad Versus Narrow

A detailed report of a cyber attack will usually provide indicators of compromise. Frequently these are narrow in scope, referencing a specific attack group as observed in a particular attack on an enterprise over a limited time period. Usually these narrow indicators are specific artifacts of an observed attack that might constitute proof of compromise by themselves. For that attack they have high specificity, but frequently at the cost of low sensitivity to comparable attacks that use different artifacts.

Essentially, narrow indicators offer very limited scope, which is why they exist by the billions in constantly expanding databases of malware signatures, suspicious network addresses, malicious registry keys, file and packet content snippets, file paths, intrusion detection rules, and so on. The continuous endpoint monitoring system supplied by Ziften aggregates some of these third party databases and threat feeds into the Ziften Knowledge Cloud, to benefit from known-artifact detection. These detection elements can be applied in real time as well as retrospectively. Retrospective application is vital given the short-lived nature of these artifacts, as attackers constantly change the observable details of their attacks to frustrate this narrow IoC detection technique. This is why a continuous monitoring solution must archive monitoring results for a long period (relative to industry-reported typical attacker dwell times), to provide a sufficient lookback horizon.
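To make the retrospective idea concrete, here is a minimal Python sketch of re-applying freshly received narrow IoCs to archived endpoint telemetry. The event schema, feed format, and lookback window are illustrative assumptions, not Ziften's actual implementation:

```python
from datetime import datetime, timedelta

# Hypothetical archived endpoint events: (timestamp, endpoint_id, kind, value)
archive = [
    (datetime(2014, 11, 3), "ep-017", "sha256", "9f2b..."),
    (datetime(2015, 1, 21), "ep-042", "dst_ip", "203.0.113.7"),
]

# Newly received narrow IoCs from an aggregated threat feed
fresh_iocs = {("dst_ip", "203.0.113.7"), ("sha256", "a11c...")}

def retrospective_scan(archive, iocs, now, lookback_days=365):
    """Re-apply fresh narrow IoCs to historical telemetry.

    A long lookback horizon matters because attacker dwell times are
    typically measured in months, while narrow artifacts are short-lived
    and may only become known well after their first use.
    """
    horizon = now - timedelta(days=lookback_days)
    return [(ts, ep, kind, val)
            for ts, ep, kind, val in archive
            if ts >= horizon and (kind, val) in iocs]

for hit in retrospective_scan(archive, fresh_iocs, now=datetime(2015, 3, 1)):
    print("Retrospective IoC match:", hit)  # flags ep-042's C2 contact
```

The key design point is that the archive, not the IoC feed, determines what can be found: artifacts learned today only have value if telemetry from months ago is still available to scan.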

Narrow IoCs have significant detection value, but they are largely ineffective for detecting new cyber attacks by skilled hackers. New attack code can be pre-tested against common enterprise security solutions in laboratory environments to confirm non-reuse of detectable artifacts. Security products that operate only as black/white classifiers suffer from this weakness, i.e. by giving an explicit determination of malicious or benign. This approach is easily evaded. The protected organization is likely to be thoroughly attacked for months or years before any detectable artifacts can be identified (after intensive investigation) for that particular attack instance.

In contrast to the ease with which cyber attack artifacts can be obscured by standard hacker toolkits, the characteristic techniques and strategies, the modus operandi, used by attackers have persisted over many years. Typical techniques such as weaponized sites and documents, new service installation, vulnerability exploitation, module injection, sensitive folder and registry modification, new scheduled tasks, memory and drive corruption, credential compromise, malicious scripting, and many others are broadly common. Proper use of system logging and monitoring can detect much of this characteristic attack activity, when appropriately paired with security analytics that concentrate attention on the highest-risk observations. This eliminates the opportunity for hackers to pre-test the evasiveness of their malicious code, because the quantification of risk is not black and white, but nuanced shades of gray. In particular, all endpoint risk is varying and relative, across any network/user environment and time period, and that environment (and its temporal characteristics) cannot be duplicated in any laboratory environment. The essential hacker concealment methodology is foiled.
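As an illustration of risk quantified in shades of gray rather than black and white, here is a hedged Python sketch that accumulates a relative risk score from observed generic techniques. The technique names, weights, and rarity scaling are hypothetical, not Ziften's actual model:

```python
# Illustrative weights for generic attack techniques (assumed values).
TECHNIQUE_WEIGHTS = {
    "new_service_install": 0.3,
    "module_injection": 0.5,
    "sensitive_path_write": 0.4,
    "new_scheduled_task": 0.2,
    "credential_access": 0.6,
    "suspicious_script": 0.3,
}

def endpoint_risk(observations, baseline_rate):
    """Accumulate a relative, non-binary risk score for an endpoint.

    Each observed generic technique contributes its weight, scaled up
    when the technique is rare in this environment (low baseline rate).
    The result is a ranking signal, not a malicious/benign verdict.
    """
    score = 0.0
    for technique in observations:
        weight = TECHNIQUE_WEIGHTS.get(technique, 0.1)
        rarity = 1.0 - baseline_rate.get(technique, 0.0)  # rarer => higher
        score += weight * rarity
    return score

obs = ["new_service_install", "sensitive_path_write", "credential_access"]
rates = {"new_service_install": 0.05, "sensitive_path_write": 0.01}
print(f"risk score: {endpoint_risk(obs, rates):.2f}")  # graded, not black/white
```

Because the score depends on environment-specific baseline rates, an attacker cannot reproduce it in a lab and tune their code until it passes, which is exactly the property the paragraph above describes.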

In future posts we will examine Ziften endpoint risk analysis in greater detail, along with the vital relationship between endpoint security and endpoint management. “You can’t secure what you don’t manage, you can’t manage what you don’t measure, you can’t measure what you don’t track.” Organizations get breached because they have less oversight and control of their endpoint environment than the cyber attackers have. Keep an eye out for future posts…

 

 

How Ziften Continuous Endpoint Monitoring Would Have Dealt With The Carbanak Indicators Of Compromise, Part 3 – Charles Leaver

Presented By Charles Leaver And Written By Dr Al Hartmann

Part 3 in a 3 part series

Below are excerpts of the indicators of compromise (IoCs) from the technical reports on the Anunak/Carbanak APT attacks, with comments on their detection by the Ziften continuous endpoint monitoring solution. The Ziften system focuses on generic indicators of compromise that have remained consistent across decades of hacker attacks and cyber security experience. Generic IoCs can be identified for any operating system, such as Linux, OS X, and Windows. Specific indicators of compromise also exist that reveal C2 infrastructure or particular attack code instances, but these are short-lived and not typically reused in fresh attacks. There are billions of these artifacts in the cyber security world, with thousands added each day. Generic IoCs are embedded in the Ziften security analytics for the supported operating systems, and the specific IoCs are employed via the Ziften Knowledge Cloud through subscriptions to a number of industry threat feeds and watchlists that aggregate them. Both have value and help in the triangulation of attack activity.

1. Exposed vulnerabilities

Excerpt: All observed cases utilized spear phishing emails with Microsoft Word 97–2003 (.doc) files attached or CPL files. The doc files exploit both Microsoft Office (CVE-2012-0158 and CVE-2013-3906) and Microsoft Word (CVE-2014-1761).

Comment: Not actually an IoC, but critical unpatched vulnerabilities are a major hacker exploitation vector and a large red flag that raises the risk score (and the SIEM priority) for the endpoint, particularly if other indicators are also present. These vulnerabilities are signs of lax patch management and vulnerability lifecycle management, which results in a weakened cyber defense posture.

2. Geographies That Are Suspect

Excerpt: Command and Control (C2) servers located in China have been identified in this campaign.

Comment: The geolocation of endpoint network touches and scoring by geography both contribute to the risk score that drives up the SIEM priority. There are valid reasons for contact with Chinese servers, and some companies may have sites located in China, but this should be confirmed with spatial and temporal anomaly checking. IP address and domain information should be attached to the resulting SIEM alert so that SOC triage can proceed quickly.
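A minimal sketch of how geography could feed an endpoint risk score, assuming a hypothetical geolocation lookup and illustrative weights (nothing here reflects Ziften's actual scoring):

```python
# Example per-country weights for suspect geographies (assumed values).
SUSPECT_GEO_WEIGHT = {"CN": 0.4, "UA": 0.3, "RU": 0.3}

def geo_risk(connections, geolocate, baseline_countries):
    """Score network touches by geography, discounting known-normal geos.

    `geolocate` maps an IP to a country code; `baseline_countries` is the
    set of countries this endpoint (or its peer group) normally contacts.
    Context (IP, domain, country) accompanies any alert raised, so SOC
    triage has what it needs.
    """
    alerts = []
    for ip, domain in connections:
        country = geolocate(ip)
        if country in SUSPECT_GEO_WEIGHT and country not in baseline_countries:
            alerts.append({"ip": ip, "domain": domain, "country": country,
                           "score": SUSPECT_GEO_WEIGHT[country]})
    return alerts

# Usage with a stubbed geolocation function:
lookup = {"203.0.113.7": "CN"}.get
print(geo_risk([("203.0.113.7", "c2.example.net")], lookup, {"US", "DE"}))
```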

3. Binaries That Are New

Excerpt: Once the remote code execution vulnerability is successfully exploited, it installs Carbanak on the victim’s system.

Comment: Any new binary is suspicious, but not all of them should raise alarms. Image metadata should be evaluated for fit, for example a new app, or a new version of an existing app from an existing vendor on a likely file path for that vendor, and so on. Hackers will try to spoof whitelisted apps, so signing data can be compared, along with file size and filepath, to filter out obvious instances.
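For illustration, here is a sketch of vetting a new binary's metadata against an expected vendor profile; the profile fields, names, and thresholds are assumptions:

```python
# Hypothetical expected profile for a known vendor's software.
KNOWN_VENDOR_PROFILE = {
    "vendor": "ExampleSoft",
    "path_prefix": r"C:\Program Files\ExampleSoft",
    "signer": "ExampleSoft Inc.",
    "size_range": (200_000, 5_000_000),  # bytes
}

def vet_new_binary(meta, profile):
    """Return reasons a new binary looks suspicious, or [] if it fits.

    Attackers spoof whitelisted apps, so signer, size, and filepath are
    cross-checked rather than trusting the filename alone.
    """
    reasons = []
    if not meta["path"].startswith(profile["path_prefix"]):
        reasons.append("unexpected install path")
    if meta["signer"] != profile["signer"]:
        reasons.append("signer mismatch")
    lo, hi = profile["size_range"]
    if not lo <= meta["size"] <= hi:
        reasons.append("size outside expected range")
    return reasons

meta = {"path": r"C:\Users\Public\examplesoft.exe",
        "signer": "Unknown", "size": 48_000}
print(vet_new_binary(meta, KNOWN_VENDOR_PROFILE))  # three red flags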

4. Unusual Or Sensitive Filepaths

Excerpt: Carbanak copies itself into “%system32%\com” with the name “svchost.exe” with the file attributes: system, hidden and read-only.

Comment: Any write into the System32 filepath is suspicious, since it is a sensitive system directory, so it is immediately subject to anomaly checking. A classic anomaly would be svchost.exe, an essential system process image, in the unusual location of the com subdirectory.
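A sketch of the canonical-location check described above; the table of expected paths is an illustrative subset:

```python
import ntpath

# Well-known system images and the directory where each belongs
# (illustrative subset, Windows path conventions assumed).
CANONICAL_DIRS = {
    "svchost.exe": r"c:\windows\system32",
    "lsass.exe": r"c:\windows\system32",
}

def path_anomaly(full_path):
    """Flag a well-known system image name observed outside its home dir.

    svchost.exe living in a com subdirectory of system32 (as Carbanak
    did) is a low-effort check with a high signal-to-noise ratio.
    """
    directory, name = ntpath.split(full_path.lower())
    expected = CANONICAL_DIRS.get(name)
    if expected and directory != expected:
        return f"{name} expected in {expected}, found in {directory}"
    return None

print(path_anomaly(r"C:\Windows\System32\com\svchost.exe"))
```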

5. New Autostarts Or Services

Excerpt: To guarantee that Carbanak has autorun privileges the malware creates a new service.

Comment: New autostarts or services are common with malware and are always examined by the analytics. Anything of low prevalence is suspect. If checking the image hash against industry watchlists shows it is unknown to the majority of antivirus engines, this raises suspicion further.
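One possible shape for such a check, combining fleet prevalence with an aggregated hash-reputation lookup (the data structures are hypothetical, and no specific vendor API is implied):

```python
def vet_new_service(image_hash, fleet_hash_counts, reputation,
                    fleet_size, prevalence_floor=0.02):
    """Return a suspicion level for a newly installed service image.

    fleet_hash_counts: {hash: number of endpoints where seen}.
    reputation: aggregated watchlist data keyed by hash (assumed format).
    """
    prevalence = fleet_hash_counts.get(image_hash, 0) / fleet_size
    known_bad = reputation.get(image_hash, {}).get("detections", 0)
    if known_bad > 0:
        return "known bad"
    if prevalence <= prevalence_floor:
        return "low prevalence, unknown to engines: investigate"
    return "common image: low concern"

rep = {}  # empty: the hash is unknown to all engines in this example
print(vet_new_service("9f2b...", {}, rep, fleet_size=5000))
```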

6. Low Prevalence File In High Prevalence Directory

Excerpt: Carbanak creates a file with a random name and a .bin extension in %COMMON_APPDATA%\Mozilla where it saves commands to be performed.

Comment: This is a classic example of “one of these things is not like the other” that is easy for the security analytics to check in a continuous monitoring environment. And this IoC is completely generic; it has nothing to do with which filename or which folder is created. Even though the technical security report lists it as a specific IoC, it is trivially genericized beyond Carbanak to future attacks.
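Here is a hedged sketch of that generic prevalence check: flag a low-prevalence file in a directory whose other contents are common across the fleet. The inventory format and thresholds are assumptions:

```python
from collections import Counter

def odd_one_out(fleet_inventory, directory, min_dir_prevalence=0.9,
                max_file_prevalence=0.05):
    """fleet_inventory: {endpoint_id: set of (directory, filename)}.

    Flags files seen on few endpoints inside a directory that most
    endpoints have, i.e. "one of these things is not like the other".
    """
    n = len(fleet_inventory)
    files = Counter(f for inv in fleet_inventory.values()
                    for d, f in inv if d == directory)
    dir_prevalence = sum(1 for inv in fleet_inventory.values()
                         if any(d == directory for d, _ in inv)) / n
    if dir_prevalence < min_dir_prevalence:
        return []  # directory itself is uncommon; different analysis applies
    return [f for f, c in files.items() if c / n <= max_file_prevalence]

fleet = {f"ep-{i:03d}": {("%COMMON_APPDATA%\\Mozilla", "common.dll")}
         for i in range(100)}
fleet["ep-042"].add(("%COMMON_APPDATA%\\Mozilla", "kqx7.bin"))  # the odd one
print(odd_one_out(fleet, "%COMMON_APPDATA%\\Mozilla"))  # ['kqx7.bin']
```

Note that the check never names Carbanak's file or folder; any rare file in any common directory triggers it, which is what makes the indicator generic.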

7. Suspect Signer

Excerpt: In order to render the malware less suspicious, the current Carbanak samples are digitally signed.

Comment: Any suspect signer raises suspicion. In one case a signer supplied an anonymous gmail address as contact, which does not inspire confidence, and the risk score for that image was elevated. In other cases no email address is supplied at all. Signers can be readily listed and a Pareto analysis performed to separate the more trusted from the less trusted signers. A less trusted signer discovered in a more sensitive directory is highly suspicious.
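A minimal sketch of such a Pareto analysis over signers, with hypothetical records and an illustrative rarity cutoff:

```python
from collections import Counter

# Directories treated as sensitive (illustrative subset).
SENSITIVE_PREFIXES = (r"c:\windows\system32",)

def rank_signers(image_records):
    """image_records: list of (signer, path). Returns (ranking, flags).

    Signers are ranked by how many images they sign; rare signers (the
    long tail of the Pareto curve) seen in sensitive directories are
    flagged for review.
    """
    counts = Counter(signer for signer, _ in image_records)
    flags = [(signer, path) for signer, path in image_records
             if counts[signer] <= 2
             and path.lower().startswith(SENSITIVE_PREFIXES)]
    return counts.most_common(), flags

records = [("Microsoft Windows", r"C:\Windows\System32\svchost.exe")] * 500 + [
    ("someguy@gmail.com", r"C:\Windows\System32\com\svchost.exe"),
]
ranking, flagged = rank_signers(records)
print(flagged)  # rare, anonymous signer in a sensitive directory
```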

8. Remote Administration Tools

Excerpt: There appears to be a preference for the Ammyy Admin remote administration tool for remote control. It is thought that the attackers used this remote administration tool because it is typically whitelisted in the victims’ environments as a result of being used frequently by administrators.

Comment: Remote admin tools (RATs) always raise suspicion, even if they are whitelisted by the organization. Anomaly checking would determine whether each new remote admin tool instance is consistent temporally and spatially. RATs are subject to abuse, and hackers prefer to use an organization’s own RATs precisely so they can avoid detection, so these tools should not be given a pass simply because they are whitelisted.

9. Patterns Of Remote Login

Excerpt: Logs for these tools show that they were accessed from two different IPs, probably used by the attackers, and located in Ukraine and France.

Comment: Remote logins are always suspect, because all external attackers are presumed to be remote. They are also used heavily in insider attacks, since the insider does not want the activity attributed to their own system. Remote addresses and time pattern anomalies would be checked, and this should reveal low prevalence use (relative to peer systems) plus any suspect geography.
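A sketch of remote-login pattern checking along those lines; the login record shape, peer-source set, and geolocation stub are all assumptions:

```python
def login_anomalies(logins, peer_sources, geolocate,
                    suspect_geos=frozenset({"UA", "FR"})):
    """logins: list of (endpoint_id, src_ip, hour_of_day).

    peer_sources: set of source IPs commonly seen across the peer group.
    Flags low-prevalence sources, suspect geographies, and odd hours.
    """
    flags = []
    for endpoint, src_ip, hour in logins:
        reasons = []
        if src_ip not in peer_sources:
            reasons.append("rare source relative to peers")
        if geolocate(src_ip) in suspect_geos:
            reasons.append("suspect geography")
        if hour < 6 or hour > 22:
            reasons.append("outside normal hours")
        if reasons:
            flags.append((endpoint, src_ip, reasons))
    return flags

geo = {"198.51.100.9": "UA"}.get  # stubbed geolocation
print(login_anomalies([("ep-007", "198.51.100.9", 3)], set(), geo))
```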

10. Atypical IT Tools

Excerpt: We have also found traces of many different tools used by the hackers inside the victim’s network to gain control of additional systems, such as Metasploit, PsExec or Mimikatz.

Comment: As sensitive apps, IT tools should always be examined for anomalies, since many hackers subvert them for malicious purposes. Metasploit might legitimately be used by a penetration tester or vulnerability researcher, but instances of this would be rare. This is a prime example where an unusual-observation report, vetted by security personnel, would result in corrective action. It also highlights how blanket whitelisting does not help in recognizing suspicious activity.

 

Charles Leaver – The Second Part Of The Carbanak Case Study Explains The Effectiveness Of Continuous Endpoint Monitoring

Presented By Charles Leaver And Written By Dr Al Hartmann

Part 2 in a 3 part series

Continuous Endpoint Monitoring Is Very Effective

 

Convicting and blocking harmful software before it can compromise an endpoint is fine. However, this technique is largely ineffective against cyber attacks that have been pre-tested to evade that type of security approach. The real problem is that these evasive attacks are carried out by experienced human hackers, while traditional endpoint defense is an automated procedure performed by endpoint security systems that rely largely on standard anti-virus technology. Human intelligence is more creative and flexible than machine intelligence, and it will outmatch automated machine defenses. This echoes the lesson of the Turing test, with automated defenses straining to match the intellect of an experienced human attacker. At the current time, artificial intelligence and machine learning are not advanced enough to fully automate cyber defense, so the human hacker is going to be victorious, while those infiltrated are left counting their losses. We are not living in a sci-fi world where machines can outthink people, so you should not assume that a security software suite will automatically take care of all of your issues and prevent all attacks and data loss.

The only genuine way to stop a determined human hacker is with a determined human cyber defender. To engage your IT Security Operations Center (SOC) personnel in this way, they must have complete visibility of network and endpoint operations. This sort of visibility will not be achieved with conventional endpoint anti-virus suites; those are designed to remain quiet unless capturing and quarantining malware. This standard technique renders the endpoints opaque to security personnel, and hackers exploit this endpoint opacity to hide their attacks. The opacity extends backwards and forwards in time: your security staff do not know what was running across your endpoint population in the past, or at this point in time, or what can be expected in the future. If persistent security staff discover clues that require a forensic lookback to uncover attacker characteristics, your antivirus suite will be unable to help. It would not have acted at the time, so no events will have been recorded.

In contrast, continuous endpoint monitoring is always working: supplying real-time visibility into endpoint operations, supplying forensic lookbacks to act on newly emerging evidence of attacks and spot indications earlier, and providing a baseline of normal patterns of operation so that it knows what to expect and can flag any irregularities in the future. Beyond mere visibility, continuous endpoint monitoring provides informed visibility, applying behavioral analytics to discover operations that appear irregular. Irregularities are continuously evaluated and aggregated by the analytics and reported to SOC staff through the organization’s security information and event management (SIEM) system, flagging the most worrying suspicious irregularities for security staff attention and action. Continuous endpoint monitoring augments and scales human intelligence; it does not replace it.
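As a rough illustration of that aggregate-and-rank step (not Ziften's implementation), this sketch forwards only the top-scoring anomalies to a placeholder SIEM sender; the record fields are assumptions:

```python
import heapq

def top_alerts(anomalies, n=10):
    """anomalies: list of dicts with 'endpoint', 'description', 'score'.

    Continuous aggregation plus ranking scales human attention: analysts
    see the worst few irregularities, not every raw event.
    """
    return heapq.nlargest(n, anomalies, key=lambda a: a["score"])

def forward_to_siem(alert):
    # Placeholder: a real integration would emit CEF/syslog to the SIEM.
    print(f"SIEM alert [{alert['score']:.2f}] {alert['endpoint']}: "
          f"{alert['description']}")

anomalies = [
    {"endpoint": "ep-042", "description": "svchost.exe outside system32",
     "score": 0.91},
    {"endpoint": "ep-007", "description": "login from rare source",
     "score": 0.74},
]
for alert in top_alerts(anomalies, n=5):
    forward_to_siem(alert)
```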

It is a bit like the old game on Sesame Street: “One of these things is not like the other.” A child can play this game. It is simple because most items (high prevalence) resemble each other, while one or a small number (low prevalence) do not and stand out. The dissimilar actions taken by cyber criminals have been quite consistent across decades of hacking. The Carbanak technical reports that listed the indicators of compromise are good examples of this, as discussed elsewhere in this series. When continuous endpoint monitoring analytics surface these patterns, it is simple to recognize something suspicious or unusual. Cyber security staff can perform rapid triage on these unusual patterns and quickly reach a yes/no/maybe determination that distinguishes unusual but known-good activity from malicious activity, or from activity that needs additional tracking and deeper forensic examination to validate.

There is no way a hacker can pre-test their attacks when this defense is in place. Continuous endpoint monitoring security has a non-deterministic threat analytics component (that flags suspect activity) along with a non-deterministic human element (that performs alert triage). Depending on current activities, the endpoint population mix, and the experience of the cyber security staff, developing attack activity may or may not be discovered. This is the nature of cyber warfare, and there are no guarantees. But if your cyber defenders are equipped with continuous endpoint monitoring analytics and visibility, they will have an unfair advantage.

 

 

Charles Leaver – Why Continuous Endpoint Monitoring Is Best – Carbanak Case Study Part One

Presented By Charles Leaver And Written By Dr Al Hartmann

 

Part 1 in a 3 part series

 

Carbanak APT Background Details

A billion-dollar bank raid targeting more than a hundred banks across the world, carried out by a group of unidentified cyber criminals, has been in the news. The attacks on the banks started in early 2014 and have been expanding around the world. The majority of the victims suffered devastating breaches for a number of months, across a number of endpoints, before experiencing monetary loss. Most of the victims had deployed security measures, including network and endpoint security software, but these offered little warning or defense against the attacks.

Several security companies have produced technical reports about the incidents, codenamed either Carbanak or Anunak, and these reports list the indicators of compromise that were observed. The companies include:

Fox-IT of Holland
Group-IB from Russia
Kaspersky Lab of Russia

This post will serve as a case study of the cyber attacks and address two questions:

1. Why were traditional endpoint security and network security unable to detect and prevent the attacks?
2. Why would continuous endpoint monitoring (as provided by the Ziften solution) have warned early about the endpoint attacks and triggered a response to prevent data loss?

Conventional Endpoint Security And Network Security Are Ineffective

Based on a legacy security design that relies excessively on blocking and prevention, conventional endpoint and network security does not offer a balance of blocking, prevention, detection, and response. It would not be difficult for a cyber criminal to pre-test their attacks against a limited number of conventional endpoint and network security products to be sure an attack would not be discovered. Many of the hackers researched the security products in place at the victim organizations and became proficient at breaking through unnoticed. The cyber criminals understood that most of these security products only react after the event and otherwise do nothing. What this means is that typical endpoint operation remains mostly opaque to IT security staff, so malicious activity stays masked (having already been tested by the hackers to avoid detection). After an initial breach has taken place, the attack can extend to users with greater privileges and to more sensitive endpoints. This can be readily accomplished by credential theft, where no malware is required, and standard IT tools (whitelisted by the victim organization) can be driven by attacker-created scripts. No detectable malware is ever present on the endpoints, so no red flags are raised. Standard endpoint security software is simply too reliant on looking for malware.

Traditional network security can be evaded in a similar manner. Hackers test their network activities first to avoid being identified by widely distributed IDS/IPS rules, and they carefully monitor normal endpoint operation (on endpoints that have been compromised) to hide their network activities within normal transaction periods and typical network traffic patterns. Fresh command and control infrastructure is created that does not appear on network address blacklists, at either the IP or domain level. There is not much here to give the hackers away. However, more astute network behavioral analysis, especially when correlated with endpoint context (discussed later in this series), can be far more effective.

But it is not time to give up hope. Would continuous endpoint monitoring (as provided by Ziften) have offered early warning of the endpoint attacks, to begin the process of stopping them and preventing data loss? Find out more in part two.