Sometimes you have to think and act like a criminal to disrupt and defeat the activities of the real bad guys. Conversely, it is only by thinking like “us”—the law-abiding majority of individuals—that cyber criminals manage to be so successful in the first place.
They know, for example, that if they compromise our individual or corporate security through hacking into our servers, individual emails, or social media accounts, we are unlikely to report it or even tell anyone because of the potential individual embarrassment and the corporate and financial risk of brand damage. Better to keep it to ourselves and avoid doing it again.
That is exactly what the cyber criminals expect and rely upon us to do. And while your errors are kicked into the long grass and you attempt to add other layers of security, you are once again vulnerable to other cyber threats, potentially by another cyber criminal or gang with whom the original hacker has shared your details on a Dark Web forum. And so it goes on. Our unwillingness to report and share threat details as part of our cyber attack response plan creates the perfect storm—it is like always crashing in the same car.
Cyber Threats by the Numbers
Two recent pieces of research highlight the issue. Intellectual property experts MarkMonitor conducted a poll revealing that 45 percent of consumers have been victims of some form of cyber crime, with 65 percent of those sampled choosing not to report the cyber threats to authorities. The research also found that one in six of these consumers has lost funds due to online fraud, with 20 percent losing in excess of £1,000 (~ $1,300).
In another survey carried out by Get Safe Online (GSO), the figures show that on average cyber crime left each person over the age of sixteen in the UK £210 (~ $272) worse off. But the actual cost could be much higher; the current figures only take into account incidents registered with the UK national reporting center Action Fraud. More than a third of those who told GSO they had been victims of online crime admitted they had not reported the incident—meaning the overall amount lost in the UK could be much higher.

More than half of the GSO reported victims (53 percent) had received fraudulent emails or messages that attempted to direct them to websites where their personal information could have been stolen, while just under 30 percent said they had been contacted by someone who was trying to trick them into giving away personal information. Of the cyber threats carried out, false requests to reset social media account passwords were the most common fraud experienced (20 percent), closely followed by emails impersonating legitimate companies attempting to solicit personal information (17 percent).
According to the MarkMonitor findings, our way of fighting back is not to use those digital services in the future. This cyber attack response plan, or rather, lack of one, is somewhat like the phenomenon of British restaurant patrons who are unhappy with their meals: when asked by a waiter if the food is to their satisfaction, they always smile and reply in the affirmative. The reality is that they are not enjoying the meal and pledge never to return because they don’t like the embarrassment of making a fuss.
More than 20 percent of the victims hacked through online accounts experienced dissatisfaction with the brand involved. This impact on brand reputation was reflected in the fact that when asked about recent high-profile cyber attacks, 71 percent of consumers said they believed these events damaged an organization’s reputation, 65 percent said they thought it decreased trust in the brand, and a further 53 percent stated people wouldn’t engage with the brand in future.
These surveys highlight the issues raised in LP Magazine Europe in 2016 by cyber-expert Paul C. Dwyer, the man behind Cyber Risk International and the Cyber Threat Task Force, both of which dare to share details of retail cyber security threats. The Task Force is a network of cyber-good-guys operating to fight against the cyber-bad-guys.
“Today, cyber security is as much about the functions of risk management, governance, legal, and compliance as it is about technical security issues. It is simply not a fair fight. We believe businesses instead should mandate a suitable senior person to join the Cyber Threat Task Force. This network can collaborate to deal with cyber threats, thus protecting businesses,” said Dwyer. “Our current problem is that there are too many cyber silos—we simply do not share information in the same way that cyber criminals do.” In other words, Dwyer is saying that there is now a need to fight fire with fire, and that means taking a different, less-constrained approach to avoid crashing in the same car.
The Trolley Problem
The world of automotive technology is not a million miles away from the issues we are talking about. Cyber terrorists’ ability to hack cars’ “infotainment” systems to take control of steering and braking caused one US brand to recall three million vehicles in 2015. This is because companies such as Google and just about every other vehicle manufacturer in the world are experimenting with autonomous vehicles no longer driven by people, but by algorithms.
A report published in late October 2016 suggested that the first self-driven cars would be unmarked to avoid potential cyber-bullying from other drivers wanting to take them on. The parallel here is the observance of rules. The driverless vehicle will be programmed to follow the rules of the road and drive at the correct speed limits, making it a potential target. It might be a form of cyber attack in reverse—the technologically challenged drivers breaking the rules of the road in an attempt to bully the digitally controlled car.
In the nineteenth century, such behavior would have been called Luddism. But in Silicon Valley, there is growing concern that the autonomous vehicle will not become a reality because the rules that govern it could have catastrophic implications for other road users. The digital geeks and cyber experts love the technology but believe that it should be agnostic and free from restrictive rule-making that will fetter its pure development.
At the root of this potential disagreement is the “trolley problem,” which posits a life-and-death situation where taking no action ensures the death of certain people, and taking another action ensures the death of several others. The trolley problem was largely a philosophical puzzle until recently, when its core conundrum emerged as a real algorithmic hairball for manufacturers of autonomous vehicles.
Our current model of driving places “agency,” or social responsibility, squarely on the shoulders of the driver. If you’re operating a vehicle, you’re responsible for what that vehicle does. Hit a group of schoolkids because you were reading a text? Your responsibility. Drive under the influence of alcohol and plow into oncoming traffic? You’re going to jail, if you survive.
But autonomous vehicles relieve drivers of that agency, replacing it with algorithms that respond according to pre-determined rules. Exactly how those rules are determined is where the messy bits show up. In a modified version of the trolley problem, imagine you’re cruising along in your autonomous vehicle when a team of Pokémon Go players runs out in front of your car. Your vehicle has three choices: Swerve into oncoming traffic, which will almost certainly kill you. Swerve across a sidewalk and drive over an embankment, where the fall will most likely kill you. Or continue straight ahead, which would save your life, but most likely kill a few people along the way. What to do? Well, if you had been driving, it is a basic instinct for you to swerve to avoid the people. But Mercedes-Benz, along with just about every other auto manufacturer that runs an advanced autonomous driving program, has made a rules-driven, customer-centric decision. It would continue its journey straight ahead.
Mercedes is a brand that for more than a century has meant safety, security, and privilege for its customers. So its automated software will choose to protect its passengers above all others. And let’s be honest—who would want to buy an autonomous car that might choose to kill you in any given situation?
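The passenger-first rule described above can be reduced to a simple minimization over estimated outcomes. The sketch below is purely illustrative: the maneuver names, risk numbers, and the `choose_maneuver` function are assumptions made for this example, not any manufacturer’s actual algorithm.

```python
# Toy sketch of a rules-driven, passenger-first collision policy.
# All maneuver names and risk estimates are illustrative assumptions,
# not any real manufacturer's decision logic.

def choose_maneuver(options):
    """Pick the maneuver with the lowest estimated risk to the passenger,
    ignoring risk to bystanders -- the 'customer-centric' rule."""
    return min(options, key=lambda o: o["passenger_risk"])

# The three choices from the modified trolley problem above,
# with made-up risk estimates on a 0-1 scale.
options = [
    {"name": "swerve_into_traffic",    "passenger_risk": 0.95, "bystander_risk": 0.10},
    {"name": "swerve_over_embankment", "passenger_risk": 0.90, "bystander_risk": 0.00},
    {"name": "continue_straight",      "passenger_risk": 0.05, "bystander_risk": 0.80},
]

print(choose_maneuver(options)["name"])  # continue_straight
```

Note that `bystander_risk` is never consulted: that omission, not a bug, is exactly the societal problem the article goes on to describe.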
As one Silicon Valley commentator said, “It’s pretty easy to imagine that every single automotive manufacturer will adopt Mercedes’ philosophy. But where does that leave us—a fleet of autonomous robot killers, all logically making decisions that favor their individual customers over societal good? It sounds far-fetched, but spend some time considering this scenario, and it becomes clear that we have a lot more planning to do before we can unleash this new form of robot agency on the world.”
It’s messy, difficult work, and it likely requires us to rethink core assumptions about how roads are built, whether we need (literal) guardrails to protect us, and whether (or what kind of) cars should even be allowed near pedestrians and inside congested city centers. In short, we most likely need an entirely new plan for transit, one that deeply rethinks the role automobiles play in our lives.
Collaboration against Cyber Threats Is Key
Now apply this logic to cyber crime. The rules that govern our current cyber attack response plan allow the criminals latitude as we second-guess our behaviors and do not report our own mistakes. In the same way, rules that program autonomous cars to protect their own passengers first can be worse for society as a whole.
In loss prevention terms, there needs to be a broader debate between the technology experts and the retailers as to how cyber-crime fighting can be designed into programs, rather than trying to apply Band-Aids to open wounds after cyber criminals have accessed individual or corporate data and accounts. While the cyber-security experts work in silos to protect their own reputations, the rules that govern and regulate do not apply in the criminal hyperspace, where customer details are openly shared and bought off the Dark Web.
What Dwyer is trying to do is a significant start. The task force’s aim is to keep it simple: be honest and share your pain points with the network that can help. It moves away from the silo approach to one of collaborating on known cyber threats and MOs so that the network of good guys can, at the very least, disrupt the activities of the bad hackers. It requires no rules or regulations other than a mandate from the business to bring its intelligence to the table, details that can help others. It is the intelligence-led start of the journey away from always crashing in the same car by taking control of the trolley so that it does not kill anyone, including the driver. But it requires the main protagonists to move away from the old model of confidentiality to one that dares and cares enough to share.
This article was originally published in LP Magazine Europe in 2016 and was updated March 12, 2018.