
Charles Leaver – Why Using Edit Distance Is Essential, Part Two

Written by Jesse Sampson and presented by Charles Leaver, CEO, Ziften


In the first post on edit distance, we looked at hunting for malicious executables with edit distance (i.e., how many character edits it takes to make two text strings match). Now let’s look at how we can use edit distance to hunt for malicious domains, and how we can build edit distance features that can be combined with other domain name features to identify suspicious activity.

Here is the Background

What are bad actors doing with malicious domains? It may be simply using a close misspelling of a common domain name to fool careless users into viewing ads or picking up adware. Legitimate websites are gradually catching on to this technique, sometimes called typosquatting.
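For instance (using the Vertica editDistance function covered in the code section below; the misspelled domain here is invented), a one-character typo sits at edit distance 1 from its target:

    -- Hypothetical one-character typo of ziften.com:
    SELECT editDistance('zifden.com', 'ziften.com');  -- returns 1 (d -> t)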

Other malicious domains are the product of domain generation algorithms, which can be used for all sorts of dubious ends, such as evading countermeasures that block known compromised sites, or overwhelming domain name servers in a distributed denial-of-service (DDoS) attack. Older variants use randomly generated strings, while more advanced ones add tricks like injecting common words, further confusing defenders.

Edit distance can help with both use cases; let’s see how. First, we’ll exclude common domains, since these are typically safe. Moreover, a list of common domains provides a baseline for spotting anomalies. One good source is Quantcast. For this discussion, we will stick to domains and ignore subdomains (e.g., ziften.com, not www.ziften.com).

After data cleaning, we compare each candidate domain name (input data observed in the wild by Ziften) to its potential neighbors in the same top level domain (the last part of a domain name – classically .com, .org, and so on, though now it can be almost anything). The basic task is to find the nearest neighbor in terms of edit distance. By finding domains that are one step away from their nearest neighbor, we can easily spot typo-ed domain names. By finding domain names far from their neighbor (the normalized edit distance we introduced in the first post is useful here), we can also find anomalous domains in edit distance space.
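As a minimal sketch of that basic task – assuming hypothetical tables domains_observed (candidates seen in the wild) and domains_common (the baseline list), each carrying a tld column – the nearest-neighbor distance can be computed with a join and an aggregate:

    -- Minimal sketch: distance from each observed domain to its nearest
    -- common-domain neighbor in the same top level domain.
    -- domains_observed and domains_common are hypothetical table names.
    SELECT o.domain,
           MIN(editDistance(o.domain, c.domain)) AS nn_distance
    FROM domains_observed o
    JOIN domains_common c
      ON o.tld = c.tld
     AND o.domain <> c.domain    -- skip exact matches; those are known-good
    GROUP BY o.domain;

Rows with nn_distance = 1 are the typo candidates, while unusually large normalized distances (see the fuller query below) flag the anomalous names.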

What were the Outcomes?

Let’s take a look at how these results appear in real life. Use caution when browsing to these domains, since they may contain malicious content!

Here are a few possible typos. Typosquatters target well-known domains because there are more chances somebody will visit them. Many of these are suspect according to our threat feed partners, but there are some false positives too, with cute names like “wikipedal”.

Here are some odd-looking domain names that are far from their neighbors.

So now we have produced two useful edit distance metrics for hunting. Not only that, we have three features to potentially add to a machine-learning model: rank of the nearest neighbor, distance from the neighbor, and a flag for edit distance 1 from the neighbor, which suggests a typosquatting risk. Other features that would pair well with these include other lexical features like word and n-gram distributions, entropy, and string length – and network features like the total count of failed DNS requests.
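Here is a sketch of how those edit distance features might be assembled for a model, assuming a hypothetical nearest_neighbors table shaped like the output of the query in the next section:

    -- Assembling model features from a hypothetical nearest_neighbors table.
    SELECT candidate,
           neighbor_rank,                           -- rank of the nearest neighbor
           nn_distance,                             -- distance from the neighbor
           CASE WHEN nn_distance = 1 THEN 1
                ELSE 0 END           AS is_typo_risk, -- edit distance 1 flag
           LENGTH(candidate)         AS name_length   -- simple lexical feature
    FROM nearest_neighbors;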

Simplified Code that you can Play Around with

Here is a simplified version of the code to play around with! It was developed on HP Vertica, but this SQL should run on most modern databases. Note that the Vertica editDistance function may differ in other implementations (e.g., levenshtein in Postgres or UTL_MATCH.EDIT_DISTANCE in Oracle).
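A minimal version along those lines, assuming hypothetical input tables domains_observed(domain, tld) and domains_common(domain, tld, site_rank) built from a source like Quantcast:

    -- Sketch of the nearest-neighbor hunting query, written for HP Vertica.
    -- Table and column names are illustrative, not a production schema.
    WITH pairs AS (
        SELECT o.domain    AS candidate,
               c.domain    AS neighbor,
               c.site_rank AS neighbor_rank,
               editDistance(o.domain, c.domain) AS dist
        FROM domains_observed o
        JOIN domains_common c
          ON o.tld = c.tld             -- compare only within the same TLD
         AND o.domain <> c.domain      -- exact matches are known-good
    ),
    ranked AS (
        SELECT pairs.*,
               ROW_NUMBER() OVER (PARTITION BY candidate ORDER BY dist) AS rn
        FROM pairs
    )
    SELECT candidate,
           neighbor,
           neighbor_rank,
           dist AS nn_distance,        -- 1 = possible typosquat
           dist / GREATEST(LENGTH(candidate),
                           LENGTH(neighbor))::FLOAT AS normalized_dist
    FROM ranked
    WHERE rn = 1                       -- keep only the nearest neighbor
    ORDER BY nn_distance;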

Charles Leaver – Environments That Are Not Managed Correctly Will Not Be Secure And Vice Versa

Written by Charles Leaver, Ziften CEO


If your business computing environment is not effectively managed, there is no way that it can be completely secure. And you cannot efficiently manage those complicated enterprise systems unless there is confidence that they are secure.

Some may call this a chicken-and-egg situation, where you do not know where to start. Should you start with security? Or should you start with system management? That is the wrong way to look at it. Think of it instead like a Reese’s Peanut Butter Cup: it’s not chocolate first, and it’s not peanut butter first. Instead, both are blended together – and treated as a single delicious treat.

Many organizations – I would argue too many – are structured with an IT management team reporting to a CIO, and a security management team reporting to a CISO. The CIO team and the CISO team don’t know each other, talk to each other only when absolutely necessary, have separate budgets, certainly have separate priorities, read different reports, and use different management platforms. On a day-to-day basis, what constitutes a task, an issue, or an alert for one team flies entirely under the other team’s radar.

That’s bad, because both the IT and security teams must make assumptions. The IT team assumes that all assets are secure unless somebody tells them otherwise. For example, they assume that devices and applications have not been compromised, users have not escalated their privileges, and so on. Likewise, the security team assumes that the servers, desktops, and mobile devices are working correctly, operating systems and apps are fully updated, patches have been applied, etc.

Since the CIO and CISO teams aren’t talking to each other, don’t understand each other’s roles and goals, and aren’t using the same tools, those assumptions may not be correct.

And again, you can’t have a secure environment unless that environment is properly managed – and you can’t manage that environment unless it’s secure. Or to put it another way: an unsecured environment makes anything you do in the IT organization suspect and irrelevant, and means that you can’t know whether the information you are seeing is accurate or manipulated. It may all be fake news.

How to Bridge the IT / Security Gap

How to bridge that gap? It sounds simple but it can be difficult: ensure that there is an umbrella covering both the IT and security teams, so that both report to the same person or structure somewhere. It might be the CIO, it might be the CFO, it might be the CEO. For the sake of argument here, let’s say it’s the CFO.

If the company doesn’t have a secure environment, and there’s a breach, the value of the brand and the business can be reduced to nothing. Similarly, if the users, devices, infrastructure, applications, and data aren’t managed well, the company can’t work effectively, and its value drops. As we’ve discussed: if it’s not properly managed, it can’t be secured, and if it’s not secure, it can’t be well managed.

The fiduciary obligation of senior executives (like the CFO) is to protect the value of business assets, and that means making certain IT and security talk to each other, understand each other’s priorities, and, where possible, see the same reports and data – filtered and presented to be meaningful to their specific areas of responsibility.

That’s the thinking we adopted in developing our Zenith platform. It’s not a security management tool with IT capabilities, and it’s not an IT management tool with security capabilities. No, it’s a Peanut Butter Cup, designed equally around chocolate and peanut butter. To be less confectionery about it: Zenith is an umbrella that gives IT teams what they need to do their jobs, and gives security teams what they need as well – without coverage gaps that might undermine assumptions about the state of enterprise security and IT management.

We have to ensure that our organization’s IT infrastructure is built on a secure foundation – and also that our security is implemented on a well-managed base of hardware, infrastructure, software, and users. We can’t operate at peak efficiency, or with full fiduciary responsibility, otherwise.

Charles Leaver – More Working From Home Now So Constant Visibility Of The Endpoint Is A Must

Written by Roark Pollock and presented by Charles Leaver, Ziften CEO


A study recently completed by Gallup found that 43% of employed Americans worked remotely for at least some of their time in 2016. Gallup, which has been surveying telecommuting trends in the United States for almost a decade, continues to see more workers working outside traditional offices, and more of them doing so for a greater number of days out of the week. And, of course, the number of connected devices the typical employee uses has jumped as well, which adds to the convenience of, and the desire for, working away from the office.

This mobility certainly makes for happier employees, and one hopes more productive employees, but the problems these trends pose for both systems and security operations teams should not be overlooked. IT systems management, IT asset discovery, and threat detection and response functions all benefit from real-time and historical visibility into user, device, application, and network connection activity. And to be truly effective, endpoint visibility and monitoring should work no matter where the user and device are operating, be it on the network (local), off the network but connected (remote), or disconnected (offline). Current remote working trends are increasingly leaving security and operations teams blind to potential problems and threats.

The mainstreaming of these trends makes it even harder for IT and security teams to restrict what was previously considered higher-risk user behavior, such as working from a coffee shop. But that ship has sailed, and today security and systems management teams must be able to thoroughly track device, network, user, and application activity, detect anomalies and inappropriate actions, and enforce appropriate action or remediation regardless of whether an endpoint is locally connected, remotely connected, or disconnected.

Additionally, the fact that many employees now routinely access cloud-based applications and assets, and keep backup USB or network-attached storage (NAS) drives at home, further magnifies the need for endpoint visibility. Endpoint controls often provide the only record of remote activity, which no longer necessarily terminates in the corporate network. Offline activity presents the most extreme example of the need for continuous endpoint monitoring: clearly, network controls and network monitoring are of little use when a device is operating offline. Installing a suitable endpoint agent is critical to ensure the capture of important security and system data.

As an example of the kinds of offline activity that can be detected, a customer was recently able to monitor, flag, and report unusual behavior on a corporate laptop: a high-level executive moved large amounts of endpoint data to an unapproved USB drive while the device was offline. Because the endpoint agent was able to collect this behavioral data during the offline period, the customer was able to see the unusual action and follow up appropriately. Continuous monitoring of device, application, and user behavior, even while the endpoint was disconnected, gave the customer visibility they never had before.

Does your organization have continuous monitoring and visibility when employee endpoints are disconnected? If so, how do you achieve it?

Charles Leaver – Be Prepared For These Consequences When Machine Learning Takes Hold

Written by Roark Pollock and presented by Ziften CEO Charles Leaver


If you study history, you will see many examples of severe unintended consequences when new technology is introduced. It often surprises people that new technologies can be put to nefarious uses in addition to the positive purposes for which they are brought to market, but it happens on a very regular basis.

For instance, train robbers using dynamite (“You think you used enough dynamite there, Butch?”) or spammers using email. More recently, the use of SSL to hide malware from security controls has become more common simply because the legitimate use of SSL has made the technique more practical.

Since new technology is often appropriated by bad actors, we have no reason to believe this won’t be true of the new generation of machine learning tools that have reached the market.

To what extent will these tools be misused? There are probably a few ways that attackers can use machine learning to their advantage. At a minimum, malware authors will test their new malware against the new class of advanced threat protection products in a quest to modify their code so that it is less likely to be flagged as malicious. The effectiveness of protective security controls always has a half-life because of adversarial learning. An understanding of machine learning defenses will help attackers become more proactive in reducing the effectiveness of machine learning based defenses. An example would be an attacker flooding a network with fake traffic in the hope of “poisoning” the machine learning model being built from that traffic. The attacker’s goal would be to trick the defender’s machine learning tool into misclassifying traffic, or to produce such a high rate of false positives that the defenders would dial back the fidelity of the alerts.

Machine learning will likely also be used as an attack tool. For example, some researchers predict that attackers will use machine learning techniques to refine their social engineering attacks (e.g., spear phishing). Automating the effort required to tailor a social engineering attack is especially troubling given the effectiveness of spear phishing. The ability to automate mass customization of these attacks is a powerful economic incentive for attackers to adopt such techniques.

Expect the kinds of breaches that deliver ransomware payloads to increase sharply in 2017.

The need to automate tasks is a major driver of investment decisions for both attackers and defenders. Machine learning promises to automate detection and response and to increase operational tempo. While the technology will increasingly become a standard component of defense-in-depth strategies, it is not a magic bullet. It should be understood that attackers are actively working on evasion techniques around machine learning based detection products while also using machine learning for their own attack purposes. This arms race will require defenders to increasingly achieve incident response at machine speed, further exacerbating the need for automated incident response capabilities.