Although he is not certain, Kevin Stecko, founder of 80sTees.com, believes a former employee hacked into his e-commerce system in 2013. With the benefit of hindsight, Stecko's takeaway is, "understand that no matter how small you think you are, there's a good chance you are being targeted by hackers on a regular basis. They are probing for any weakness in your systems, practices, or people."
While organizations are increasingly the victims of external hackers, many are finally acknowledging an uncomfortable truth: insider threats are, and long have been, a very real problem. Consequently, even as they take measures to protect their networks from external threats, a growing number of companies are scrutinizing their own employees more closely and taking proactive measures to detect internal breaches.
A whopping 93% of U.S. organizations responding to Vormetric's 2015 Insider Threat Report believe they are vulnerable to insider threats, and plan to increase or maintain their spending on information technology (IT) security and data protection. The report, based on Harris Poll interviews of "more than 800 senior business managers and IT professionals in major global markets, roughly half from the U.S. and the rest from the U.K., Germany, Japan, and ASEAN countries," found 55% of respondents said their "privileged users" pose the biggest internal threat to their corporate data, followed by contractors and service providers (46%).
"Now you're seeing corporate America, as well as government, coming to grips with the need to be more comprehensive and proactive with approaching inside threats" by identifying individuals who present a "threat posture" and could potentially violate an organization's policies, says Scott Weber, managing director of Stroz Friedberg, a digital forensics consulting firm.
Insiders are much better positioned to gain access to corporate systems, Weber explains. "There's a trust factor; they know where the interesting information is, the crown jewels." At the same time, Weber adds, companies recognize the need to balance their concerns about internal threats with the protection of employee privacy. "The desire to obtain balance is part of every conversation we have," he says. "Employers want to trust employees and employees want to trust employers," and no one wants their ability to do their job and support themselves to be negatively impacted by someone inside the organization who has access to data.
With greater attention being focused on insider threats thanks to leaks of classified documents by Edward Snowden and Bradley (Chelsea) Manning, companies are seeking out technologies that use algorithms to develop employee profiles of what is 'normal' behavior versus actions that should send up a red flag. This puts them on the offense, rather than in a "mitigation posture," Weber says.
One of those red flags: an employee who normally accesses and transfers 100 files a week, who suddenly one day is transferring 1,000 files, says Mohan Koo, CEO of security company Dtex Systems. "If they actually rename those files before sending them out...or if they delete files after the transfer, that would trigger a higher level of warning," he explains. "It indicates they're trying to cover their tracks."
Other indicators of suspicious behavior may be the time of day (or night) an employee accesses files, how often, and how long they look at them, Koo says. "All of a sudden, we might see a person using [files] after hours and 10 times during the night."
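The kinds of red flags Koo describes can be sketched as a simple scoring rule. The following is an illustration only — Dtex's actual analytics are proprietary — and the event fields, per-user baselines, and thresholds here are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class TransferEvent:
    user: str
    files_transferred: int       # files sent out in this activity window
    renamed_before_send: bool    # files renamed just before transfer
    deleted_after_send: bool     # files deleted after the transfer
    hour: int                    # local hour of the activity, 0-23

# Hypothetical weekly baselines, e.g. learned from each user's history.
BASELINE = {"alice": 100}

def risk_level(event: TransferEvent) -> str:
    """Score one activity window against the user's normal behavior."""
    baseline = BASELINE.get(event.user, 0)
    score = 0
    if baseline and event.files_transferred > 5 * baseline:
        score += 2  # large spike over normal transfer volume
    if event.renamed_before_send or event.deleted_after_send:
        score += 2  # possible attempt to cover tracks
    if event.hour < 6 or event.hour > 22:
        score += 1  # after-hours activity
    if score >= 4:
        return "high"
    if score >= 2:
        return "warning"
    return "normal"
```

Under this toy rule, an employee with a 100-file weekly baseline who suddenly transfers 1,000 renamed files would score "high," while normal-volume daytime activity stays unflagged.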
Dtex's technology is used by financial services firms to predict when a person is getting ready to leave an organization and steal intellectual property, he says. The software is deployed to all computers and devices an employee uses to access the corporate network and operates in the background, Koo says. "It's logging metadata so it's non-intrusive." The software does not scan files for content or look at what people are saying on social media or email, but rather scans apps they run, files they access, and what they do with them, as well as tracking and tracing the movement of files throughout the organization. Employees are aware the software is being used, he says.
One of the companies Dtex works with is a large financial exchange that deals with very sensitive market data. "We've been able to identify people getting ready to leave the organization [who were] searching for files they've never searched for before," says Koo. "We're looking for behaviors that are different from normal behaviors. Erratic behavior generally indicates a risk for the business."
Normal or baseline behavior is based on 90 days' worth of anonymous data that is studied to see which files and apps an employee typically uses. Dtex will then study the behavior of people who have resigned from an organization to see how their behavior changed in the weeks leading up to their leaving by applying behavioral analytics, Koo says.
"They will log on later and later in the morning and earlier in the evening, and the amount of work they do on their laptop [decreases] and becomes more personal," he notes. "We've successfully caught people trying to walk out with data before they leave."
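The baseline comparison Koo describes can be illustrated with a standard statistical measure. This sketch is only an assumption about the general approach, not Dtex's method; a z-score expresses how far recent activity (here, the hour of an employee's first daily login) drifts from the norm established over a baseline window:

```python
from statistics import mean, stdev

def anomaly_zscore(baseline_values: list[float], recent_value: float) -> float:
    """How many standard deviations recent activity sits from the
    baseline; large positive values suggest a behavioral shift."""
    mu = mean(baseline_values)
    sigma = stdev(baseline_values)
    if sigma == 0:
        return 0.0
    return (recent_value - mu) / sigma

# Hypothetical first-login hours over part of a 90-day baseline window;
# an employee who normally logs on around 9:00 suddenly appears at 11:30.
baseline_logins = [8.9, 9.1, 9.0, 8.8, 9.2, 9.0, 9.1]
drift = anomaly_zscore(baseline_logins, 11.5)  # far outside normal variation
```

A real system would track many such signals at once (logon times, app usage, file access) and combine them before alerting.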
Stroz Friedberg uses proprietary technology called SCOUT, which utilizes psycholinguistic algorithms to analyze electronic communications to assess things like word choice and frequency of use of particular words and phrases, in order to identify and measure risk indicators, Weber says. "Psycholinguistics is the science that involves the psychology of language," he explains. The algorithms are used to analyze language using different methods, and then that language is scored statistically in an automated way.
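As a rough illustration of frequency-based scoring (SCOUT itself is proprietary, and real psycholinguistic models are far more sophisticated), a weighted term-frequency score might look like the following; the risk lexicon below is entirely hypothetical:

```python
import re
from collections import Counter

# Hypothetical risk-indicator terms and weights; a real psycholinguistic
# model would use validated lexicons and statistical scoring, not this.
RISK_TERMS = {"resign": 2.0, "unfair": 1.5, "revenge": 3.0, "deserve": 1.0}

def score_message(text: str) -> float:
    """Weighted frequency of risk-indicator terms, normalized by length."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    counts = Counter(words)
    raw = sum(RISK_TERMS.get(w, 0.0) * c for w, c in counts.items())
    return raw / len(words)
```

Messages scoring above a threshold would be surfaced for review, consistent with the layered, statistics-first approach Weber describes.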
For one client, Stroz Friedberg analyzed over 28 million messages from about 57,000 senders, Weber says. Once filters were applied, the messages were whittled down to about 1,700 email messages deemed "statistically significant," but even then, analysts will not read every one, only those identified based on "scientific discretion. That is frankly more respectful of privacy," he says. "You would never use one source to make a decision on how to proceed with an employee; you're always going to use a layered approach," such as combining psycholinguistic indicators with technical indicators like suspicious behaviors. "The key is to figure out what data is important and what data provides actionable intelligence and correlate that with as many indicators as possible before taking action."
Insider threats are not necessarily increasing but are more frequently publicized, perhaps due to growing awareness of cyberattacks in general, observes Tony Kolish, senior vice president of customer services at cyber security firm FireEye.
Koo agrees. He has seen organizations in the telecom, finance, pharmaceutical, and automotive industries lure people away from competitors and induce them to bring proprietary data with them, "so it really cuts across sectors."
IT employees are a particular target "because they have privileges around or know how to navigate systems, and they know what's being tracked and what isn't and how to get it out without the organization knowing about it," Koo says. "That's where visibility and knowing what people are doing becomes very, very critical."
Typically, FireEye is contacted after an attack has already occurred "and it's news to them, almost always," says Kolish. "When we go in and do a compromise assessment, [a hacker has] been in there on average longer than 200 days." While that is an alarmingly long period of time, Kolish notes it represents an improvement: two years ago, the average was 267 days.
Like Dtex, FireEye's technology focuses on detecting behavioral anomalies, like employees accessing data they normally would not access, and doing so at odd times of day. Kolish sees data being stolen in the biotech sector "where there's been a fair amount of money to be made by stealing intellectual property related to drug trials."
FireEye emphasizes to customers that they cannot prevent all breaches. "There is no wall you're going to put up or depth of defense architecture ... that will prevent all breaches," says Kolish. What companies should instead be focusing on is detecting and containing security breaches as quickly as possible. "We're in defensive mode here, all of us, and attackers are always going to find something new. Take reasonable steps to prevent them."
He advises clients to make certain they are encrypting data and using two-factor authentication. Additionally, he says, companies should "identify the data you care about the most and make sure your eyes and ears are tuned to access of that data."
Lockheed Martin has gone a step further; last year, the aerospace manufacturer and military contractor developed an insider threat detection program to proactively detect and mitigate theft or misuse of intellectual property and trade secrets. The impetus was the growing trend of data theft, as well as economic and industrial espionage affecting corporate America, says Douglas Thomas, director of Counterintelligence Operations and Corporate Investigations.
The key to developing a successful insider threat detection program, say Thomas and Weber, is including team members from a cross-section of business units. Lockheed Martin's team includes employees from the company's human resources, information security, communications, legal, ethics, and privacy divisions, says Thomas, who previously served as principal deputy director of Counterintelligence for the U.S. government.
Team members analyze data "for indications of anomalous behavior and to determine where there might be a need for further investigation." They also use data to "proactively identify employees who might be at a higher risk for approach by foreign intelligence officers. This gives us the opportunity to get in front of those employees to provide defensive counterintelligence awareness training."
Thomas says the team also devotes significant time to employee training and awareness. It utilizes technology developed at Lockheed Martin called LM WISDOM ITI, which he says analyzes the spectrum of employee attributes, behaviors, and actions that may be indicators of potential insider threat activity. It was built from best practices and feedback from seasoned counterintelligence professionals, as well as from data sources and models of behavioral indicators.
"It's important to remember that there is no magic indicator of insider threat behavior," Thomas says. "Looking at previous cases in both government and industry, we can see there are some common pre-event behaviors of concern, including: attempts to expand access to information without clearance or need to know, sudden and unexplained reversal of [a] financial situation, repeated security violations or a blatant disregard for security policies, or a sudden or unexplained withdrawal from co-workers or decline in work performance."
The challenge for IT and security personnel comes when upper management does not place a high priority on proactively detecting insider threats, says Kolish. Network security personnel often feel that if software they recommend does not work, "it makes me look stupid," he notes, "so you're less likely to see organizational change. There's less resistance to organizational change when the C-suite takes the perspective of [recognizing] there is risk to the company."
At the same time, companies can and should strike a balance between having a culture of security and respecting employee privacy, says Weber. "There's no reason for a person to be reading email until a red flag goes up," he says. There are people who may be concerned about the 'big brother' perception, but when used correctly, the tools are taking more of a scientific approach to studying individuals, he emphasizes.
"Everyone needs to step back and say when an individual causes material harm to an organization, it affects every single employee," says Weber. "There's an obligation to ensure the workplace and the assets of an organization are safe so people can continue to be safe and go to work."
©2015 ACM 0001-0782/15/11