My company's IT department sends phishing email to train employees not to fall for such scams. Somehow, this seems like the wrong way to approach the problem, but I cannot quite put my finger on why. I know phishing and other scams are a problem now, but there must be a better way to handle this than to play such games with your employees. Have you noticed this trend?
Before I get to your question, you may find it helpful to read my October 2019 Communications Kode Vicious column, "What Is a CSO Good For?" I think you will find whoever runs security where you work is cut from the same tattered cloth as the person I described there.
Yes, phishing is a problem, and as with all problems in tech, there are good solutions and there are the carpetbaggers who come along for the ride. Whoever thought playing "gotcha" with their own employees—who are forced to use the outsourced garbage email systems from Google and Microsoft that betray the promise of distributed communication—was a good idea should be drawn and quartered on the over-manicured lawn of their Silicon Valley campus. But, alas, I hear such reprimands have gone out of style.
Of course, any person who looks at these entrapment attempts objectively will see them for what they are: mean-spirited and abusive. Do you teach a child fire is hot by putting their hand in a fire? I don't think so. But that does not mean you cannot make money building and selling such systems, so they continue to proliferate, annoying people daily and teaching them nothing.
When people are busy and overloaded with email, a million instant messaging apps, and that scourge, Slack, it is no wonder they occasionally get caught by a phishing attack targeted at them by an insider—their own IT security team. After all, who better to craft a phishing message than someone inside the organization who earns points from management for every person entrapped in this way? A typical phishing message from Prince Whoever promising oil riches is relatively easy to spot and ignore, but something that looks like it is from the CEO is something a person is going to look at and maybe even click on.
The challenge of providing a safe communications environment in the face of such social engineering attacks is not just the technology; it is also people. As anyone who has done serious work in computer security knows, the biggest problems are between the keyboard and the chair, or, in modern parlance, the handset and the face. Most people—by which I mean people who are not paranoid security types—by default trust other people and are willing to give them the benefit of the doubt.
Take, for example, the practice of "tailgating," in which unauthorized people enter a business by following the person in front of them. No amount of training seems to convince most people not to be "nice" and hold doors open for others even when they do not know them. The only effective solution to tailgating is to have guards enforce badge compliance at each allowed point of entry and to lock all the other entrances.
Creating a similar solution for email simply is not practical, since part of the point of email is that anyone can communicate with anyone else, so long as they know the correct destination address. If I were to be so vintage as to send a paper letter to another person, the postal service, at least in countries that are not autocracies, would not open the envelope to check the message inside; it would simply deliver the letter. It is the recipient's responsibility to act or not act upon my generous offer to share in the wealth of my sadly deceased mother, the Princess of Neverheardofit. Can you imagine what would happen if the postal service purposely sent out fake mail solicitations with a phone number to call, and then, when the unwary recipients called, they were given a lecture about how they should not have called that number?
Because email works on the same principle of trusting the recipient not to fall for scams, it has all the same pitfalls. For now, the carpetbaggers of security have the upper hand. We have built systems so complex that they are common targets for abuse, and those who engage in checkbook security—just buying whatever the carpetbaggers are selling—are going to keep giving them the money your company founders have begged, borrowed, or stolen from the VCs. It probably will take a lawsuit from someone fired for being caught in such an internal scam to stop this scourge, but KV is not one to have high hopes in this or any other area.
The only thing KV has seen that helps in these cases—the ones where human foibles trump technological fixes—is good, direct, and honest one-on-one or small-group training. I do not mean the ridiculous videos employees are forced to watch annually about these topics but conversations with people who can explain these issues to any kind of audience. The best security programs are not run from within a dedicated security group but consist of participants from all parts of an organization, who are helped and guided by security professionals in how to explain these issues to those they work with.
Such a program is often referred to as embedding, and it is a much better model and more effective than any alternative. Several organizations have worked in this way, embedding security people and ideas into each group, both technical and non-technical. In fact, in a past life, KV had to help both a legal team and a set of C-level executives understand basic computer security. Thankfully, this experience was not recorded and it went a bit better than the boardroom scene in the movie Dogma.
It is rare to find such security programs in industry, but they do exist. The only way to solve human problems, it seems, is with the human touch.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2023 ACM, Inc.