Being curious

19 Jan 2019

Social Engineering: Protecting Systems from Users

Copy of a paper I wrote as part of my postgraduate studies on Social Engineering.

Introduction

Social engineering is a combination of underpinning theory, skills and techniques. Used together with guiding intent, these can manipulate a person into taking actions that may or may not be in their best interest (Hadnagy, 2011). They can be used by any practitioner wanting to guide others down a path of behaviour.

In the context of information security, social engineering is used by attackers to circumvent technical controls by focusing on the human element (Pekka Tetria, 2013) (David Airehrour, 2018). A social engineering practitioner will fabricate a scenario based on gathered information, choice of target and desired outcome. The outcome will involve a user (or users) taking inappropriate actions (Pekka Tetria, 2013).

As a simple example, people rely heavily on mobile phones for day-to-day tasks, including authorising transactions. While stealing a mobile phone adds complexity, it can be simpler to socially engineer a phone provider into handing over control of the service (Wiggers, 2017).

Social Engineering is effective because people inherently want to trust (George V. Hulme, 2017) and be helpful (Gragg, 2002). People don’t want to appear sceptical of another person’s actions (George V. Hulme, 2017). Given it is based on human psychology and does not have a technical underpinning, existing technical controls will provide limited mitigation of the threat (Tobac, 2018). Overall, when people are presented with a believable scenario, they can be guided to a target outcome.

Addressing social engineering effectively is possible by adopting a defence in depth approach. This requires addressing social engineering as an organisational risk (Gragg, 2002), putting the appropriate foundational elements in place, making people aware and committing to ongoing improvement.

How does Social Engineering allow hackers to gain access to systems or escalate privileges?

Information security attacks are motivated by a combination of financial, ideological, political or prestige factors (Hosburgh, 2017). Use of social engineering allows an attacker to focus on human elements to bypass existing technical security controls. Often it can be simpler for an attacker to ask for things persuasively than technically compromise a target.

A social engineering attack has three high level steps. These are data gathering, fabrication or design and persuasion or execution (Pekka Tetria, 2013). An overall attack may involve multiple social engineering attempts intermingled with technical attempts. Approaches used are at the discretion of the attacker.

Conducting a social engineering attack requires information on the target. Here it is important to remember all information can have value (Hadnagy, 2011). Developing a successful attack requires an understanding of the target and their motivations. It is easier to persuade a person to do something when they have no reason to suspect the attacker and see what’s being asked as being in their best interest (Tobac, 2018).

Once an understanding of the target has been gained the attack needs to be fabricated or designed. Here the attacker will incorporate information gathered along with knowledge and techniques of human psychology and persuasion (IPSpecialist Ltd., 2018) to achieve a target outcome. Key principles of human psychology and persuasion are:

  • Reciprocity deals with people’s tendency to return favours. Social Engineers will aim to do a favour for the target and then ask for one in return (Tobac, 2018).
  • Commitment and consistency deals with people following through on something once they have committed. Social Engineers will aim to have a target make a commitment and work to have them follow through (Yuri Diogenes, 2018)
  • Social Proof is about people acting in line with how others around them act. Social Engineers will aim to show others appearing to act a certain way to persuade their target to conform (Tobac, 2018).
  • Authority looks at people’s tendency to obey authority figures. Social Engineers may assume the role of an authority figure to direct their target to act (Yuri Diogenes, 2018).
  • Liking is about people being willing to help others they like or find attractive. Social Engineers will aim to appear attractive and likeable (Gragg, 2002).
  • Scarcity deals with creating demand through a perceived lack of supply. Social Engineers can create a perceived shortage of an item in turn to drive people to act (Tobac, 2018).
  • Overloading is about attempting to overload a target with information in order to reduce their decision-making abilities (Hadnagy, 2011).
  • Strong affect is about attempting to trigger a heightened emotional state in the target to reduce their decision-making abilities (Gragg, 2002).

The goal of fabricating and designing is to devise a scenario the target (either a user or an organisation) is likely to respond to and that delivers the attacker’s intended outcome (Hadnagy, 2011). The fabricated attack will generally be active or passive (Pekka Tetria, 2013).

An active attack involves the attacker assuming a pre-text and personally interacting with the target. A pre-text is an assumed identity used by the attacker for the purpose of persuading the target (Hadnagy, 2011). Examples of pre-texts can include an important user, helpless user or technical expert (Tarallo, 2015).

A passive attack involves the target interacting with a technological intermediary and the attacker harvesting the output. Examples of computer-based attacks include watering holes, phishing and the publishing of malicious apps or websites (IPSpecialist Ltd., 2018).
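As an illustrative sketch of the website side of such passive attacks, malicious domains frequently imitate legitimate ones (a digit `1` for a letter `l`, for example), and defenders can screen for these lookalikes. The allow-list and similarity threshold below are assumptions for illustration only, not part of the paper:

```python
import difflib

# Hypothetical allow-list of domains the organisation actually uses.
KNOWN_DOMAINS = ["example.com", "example-bank.com"]

def lookalike_of(domain, threshold=0.8):
    """Return the known domain this one closely imitates, or None."""
    if domain in KNOWN_DOMAINS:
        return None  # an exact match is legitimate, not a lookalike
    for known in KNOWN_DOMAINS:
        # Ratio of matching characters between the two strings (0.0 - 1.0).
        ratio = difflib.SequenceMatcher(None, domain, known).ratio()
        if ratio >= threshold:
            return known
    return None

print(lookalike_of("examp1e.com"))    # digit '1' substituted for 'l' → example.com
print(lookalike_of("unrelated.org"))  # → None
```

A production screen would combine this with homograph (Unicode confusable) checks and newly-registered-domain feeds, but the principle is the same: flag domains that are close to, but not exactly, what users expect.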

Finally, the end attack must be conducted and the target persuaded. This is where the social engineer uses the fabricated attack to achieve the desired outcome. This can either be short term (at the time of the attack) or conducted over a longer period. The approach will depend on the fabricated attack and the skill of the social engineer (Hadnagy, 2011).

It’s important to note that a single social engineering attack may achieve all the target outcomes in one pass. Alternatively, an overall attack may be multi-phase (Pekka Tetria, 2013) and require multiple social engineering attacks and/or contribute towards an overall technical outcome. A multi-phase attack will incorporate information gained in order to better refine or target follow-up attack stages.

Challenges in combating Social Engineering attacks

There are a number of key challenges in combating social engineering attacks. These centre on social engineering being a human psychology-based attack, user awareness of the problem and a lack of understanding in the people attempting to mitigate it. Social engineering attacks target the human elements of a system using persuasion (Cialdini, 2009) and manipulation (Gragg, 2002). Without an understanding of the problem and how to fix it, attempts to mitigate it will not be effective (Maria Bada, 2014).

People are susceptible to persuasion and psychological triggers. These include reciprocity, commitment and consistency, social proof, authority, liking, scarcity (Cialdini, 2009), strong affect and overloading (Gragg, 2002). Countering these requires shifting user behaviour and teaching people to be politely paranoid (Tobac, 2018).

People are not sufficiently concerned to behave in a safe way (M. Junger, 2017). People are focused on completing specific tasks and see security as a secondary goal. Anything that gets in the way of completing that goal will be seen as an interruption, with effort taken to bypass it (Maria Bada, 2014). As an example, users have been shown to click through certificate warning pages in a web browser rather than understand the warning.

As in many areas, users suffer from an optimism bias: they hold expectations that are better than reality (Sharot, 2011). Under this, users regard themselves as exempt from (or at reduced likelihood of) future risk for problems they have not encountered. As a result, they see the impact of encountering a hazard as less than it actually is. Research showed that in a large proportion of identity theft incidents, targets voluntarily provided the information to attackers (M. Junger, 2017).

Lastly, mis-targeted attempts at addressing the threat of social engineering can range from achieving nothing to making things worse. People have been shown to disengage where they don’t see things as personally relevant to them (Maria Bada, 2014). Fear-based appeals have also been shown to drive the opposite behaviour (Ahluwalia, 2000).

Tools & Techniques for Mitigating Social Engineering attacks

Social engineering must first be recognised as a threat before it can be addressed (Hadnagy, 2011). Addressing social engineering requires a defence in depth approach that focuses on the human element (Gragg, 2002). Only with a strong security culture can social engineering be effectively mitigated.

To achieve this, foundational elements must be in place. These include security policies, plans & procedures (Gragg, 2002) (Tarallo, 2015). Management support is needed to establish these and see them enforced (Winkler, 2017). Together, these items give staff a solid reference point for knowing what is and isn’t acceptable.

Business processes must be appropriately designed and hardened. If processes are not appropriately designed, users will find a way to circumvent them (Maria Bada, 2014). Process hardening needs to incorporate general security practices, with additional measures (i.e. dual control, segregation of duties, call-backs) taken for high-risk items (Tobac, 2018). As with policy, staff need to be supported in pushing back on requests outside of the norm (Gragg, 2002).
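A minimal sketch of what dual control can look like in a hardened payment process; the threshold, approver count and names are hypothetical, not drawn from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class PaymentRequest:
    """A high-risk action that must not proceed on one person's say-so."""
    amount: float
    requester: str
    approvals: set = field(default_factory=set)

    def approve(self, approver: str) -> None:
        # Segregation of duties: the requester can never approve themselves.
        if approver == self.requester:
            raise ValueError("requester cannot approve their own request")
        self.approvals.add(approver)

    def can_execute(self, threshold: float = 10_000.0, required: int = 2) -> bool:
        # Dual control: above the threshold, two distinct approvers are needed.
        if self.amount < threshold:
            return len(self.approvals) >= 1
        return len(self.approvals) >= required

req = PaymentRequest(amount=50_000.0, requester="alice")
req.approve("bob")
print(req.can_execute())   # False: one approver is not enough above the threshold
req.approve("carol")
print(req.can_execute())   # True: two distinct approvers
```

The point of the design is that a social engineer must now persuade two independent people rather than one, which roughly squares the difficulty of the attack.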

Regular, effective awareness campaigns on social engineering must be completed for all users. Providing awareness campaigns to some users and not others leaves a vulnerability that can be exploited by an attacker (Tobac, 2018). For awareness campaigns to be effective they must reflect the reality of social engineering attacks, be personally relevant (Wikipedia, 2018), not overload people and be engaging. People need to see the awareness campaign as personally relevant to them; otherwise they will fail to act given competing priorities (Maria Bada, 2014).

As part of awareness campaigns users need to be made aware of how attackers operate (M. Junger, 2017). This can be addressed by picking examples of attacks from the real world (Winkler, 2017) that are relevant to the targeted users. Showing how real attackers operate makes people aware of information they are releasing (Olavsrud, 2010).

Awareness campaigns need to be kept brief and held regularly. Keeping awareness campaigns brief prevents users from being overloaded; attempting to teach everything may amount to users learning very little (Kat Krol, 2012). Keeping awareness campaigns regular ensures regular reinforcement; holding them less frequently reduces their effectiveness (Jan-Willem Bullee, 2016).

Awareness campaigns must engage users. Simple web-based awareness campaigns can easily be treated as a compliance exercise (Pekka Tetria, 2013). Campaign formats should be varied to include role play, face-to-face sessions (Winkler, 2017) and coverage as part of security testing. Breaking down optimism bias may require tricking or fooling users as part of the exercise.

An organisation needs to understand that some social engineering attacks will succeed. To combat this, robust technical controls need to be in place, along with social engineering being included in incident response capability (Hadnagy, 2011). Having robust technical controls ensures that technical attacks delivered via social engineering have limited opportunity to spread. An example of this is using multifactor authentication to limit the damage of credential harvesting. Including social engineering as part of incident response ensures an organisation has a plan to respond if and when an incident does occur.
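As a sketch of why multifactor authentication blunts credential harvesting, a time-based one-time password (TOTP, RFC 6238) can be computed with nothing but the standard library. The secret used here is the RFC’s published test key, not anything from the paper:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at T=59 seconds.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, at=59, digits=8))  # → 94287082
```

Even if a phishing page harvests the password, the attacker still needs the current code, which changes every 30 seconds and cannot be reused later, sharply limiting how far harvested credentials can spread.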

The final defensive step is to continually test, learn and improve. The strength of the defences must continually be tested, findings need to be learnt from and improvements constantly made. Staff need to be supported through the process and not unduly penalised (Maria Bada, 2014). In addressing this, understand that attacks are not static; they will evolve over time. For an organisation to counter them effectively, security must be seen as a continuous journey.

Conclusion

Using the theory, skills & techniques of social engineering in an information security attack can lead to success where a purely technical approach may fail. This is due to social engineering attacks addressing what is known of human psychology, principles of persuasion and associated cognitive biases. Given the nature of social engineering attacks a human response is needed to counter them (Hadnagy, 2011).

An attacker will execute a social engineering attack using three high level steps. These are data gathering, fabrication or design and persuasion or execution (Pekka Tetria, 2013). An overall attack may involve multiple social engineering attempts intermingled with technical attempts. Approaches used are at the discretion of the attacker.

Mitigating social engineering attacks is challenging. People inherently suffer from an optimism bias: they hold expectations that are better than reality (Sharot, 2011), and regard themselves as exempt from (or at reduced likelihood of) future risk for problems they have not encountered. Furthermore, unless users see security as core to what they are doing, attempts to change their behaviour will be seen as a nuisance.

Successfully mitigating social engineering requires taking a defence in depth approach (Winkler, 2017). Management support, foundational security elements, business process maturity, appropriate awareness campaigns (Hadnagy, 2011) & continuous improvement aspects must all be in place. Work done needs to focus on the quality of the outcomes and not just treating it as a tick box / compliance exercise (Maria Bada, 2014). Only once these items are done correctly can the threat of social engineering be appropriately mitigated in an organisation.

References

  • Ahluwalia, R. (2000). An Examination of Psychological Processes Underlying Resistance to Persuasion. Journal of Consumer Research, 217-232.
  • Allsopp, W. (2017). Advanced Penetration Testing | Hacking the World’s Most Secure Networks . Indianapolis, Indiana: John Wiley & Sons, Inc.
  • David Airehrour, N. V. (2018). Social Engineering Attacks and Countermeasures in the New Zealand Banking System: Advancing a User-Reflective Mitigation Model. MDPI.
  • Francois Mouton, L. L. (2014). Towards an Ontological Model Defining the Social Engineering Domain. ICT & Society (pp. 266-279). Pretoria, South Africa: University of Pretoria, Information and Computer Security Architecture Research Group.
  • George V. Hulme, J. G. (2017, Aug 3). What is social engineering? How criminals take advantage of human behavior. Retrieved from CSO Online: https://www.csoonline.com/article/2124681/social-engineering/what-is-social-engineering.html
  • Gragg, D. (2002, Dec). A Multi-Level Defense Against Social Engineering. Retrieved from SANS Institute - Reading Room: https://www.sans.org/reading-room/whitepapers/engineering/multi-level-defense-social-engineering-920
  • Hadnagy, C. (2011). Social Engineering The Art of Human Hacking. Indianapolis, Indiana: Wiley Publishing, Inc.
  • Hosburgh, M. (2017, Nov 13). What are the motives behind cyber-attackers? Retrieved from CGS Blog: https://www.cgsinc.com/blog/what-are-motives-behind-cyber-attackers
  • IPSpecialist Ltd. (2018). CEH V10 EC-Council Certified Ethical Hacker Complete Guide. London: IPSpecialist Ltd.
  • Jan-Willem Bullee, L. M. (2016). Telephone-based social engineering attacks: An experiment testing the success and time decay of an intervention. Proceedings of the inaugural Singapore Cyber Security R&D Conference (SG-CRC 2016), 107-114.
  • Kat Krol, M. M. (2012). Don’t work. Can’t work? Why it’s time to rethink security warnings. Conference on Risks and Security of Internet and Systems (CRiSIS), 1-8.
  • M. Junger, L. M.-J. (2017). Priming and warnings are not effective to prevent social engineering attacks. Computers in Human Behavior, 75-87.
  • Maria Bada, A. M. (2014). Cyber Security Awareness Campaigns: Why do they fail to change behaviour? Retrieved from University of Oxford - Department of Computer Science: http://www.cs.ox.ac.uk/files/7194/csss2015_bada_et_al.pdf
  • Mike Chapple, J. M. (2018). CISSP | Certified Information Systems Security Professional. Indianapolis, Indiana: John Wiley & Sons, Inc.
  • Neil, I. (2018). CompTIA Security+ Certification Guide. Birmingham: Packt Publishing.
  • Olavsrud, T. (2010, Oct 19). 9 Best Defenses Against Social Engineering Attacks. Retrieved from eSecurity Planet: https://www.esecurityplanet.com/views/article.php/3908881/9-Best-Defenses-Against-Social-Engineering-Attacks.htm
  • Pekka Tetria, J. V. (2013). Dissecting social engineering. Behaviour & Information Technology, 1014–1023.
  • Cialdini, R. B. (2009). Influence: The Psychology of Persuasion (Collins Business Essentials). HarperCollins e-books.
  • Sharot, T. (2011, Dec 6). The optimism bias. Current Biology, R941-R945. Retrieved from https://www.sciencedirect.com/science/article/pii/S0960982211011912
  • Mitnick, K. D., & Simon, W. L. (2002). The Art of Deception. Indianapolis, Indiana: Wiley Publishing Inc.
  • Tarallo, H. M. (2015). Social Engineering – Countermeasures and controls to mitigate hacking. Ann Arbor: ProQuest LLC.
  • Tobac, R. (2018, Dec 18). KringleCon - Rachel Tobac, How I would Hack You: Social Engineering Step-by-Step. Retrieved from YouTube: https://www.youtube.com/watch?v=L5J2PgGOLtE
  • Wiggers, K. (2017, Jun 07). Here’s how to stop SIM fraudsters from draining your bank account. Retrieved from Digital Trends: https://www.digitaltrends.com/mobile/sim-swap-fraud-explained/
  • Wikipedia. (2018, Nov 20). Optimism bias. Retrieved from Wikipedia: https://en.wikipedia.org/wiki/Optimism_bias
  • Wikipedia. (2019, Jan 9). Social engineering (security). Retrieved from Wikipedia: https://en.wikipedia.org/wiki/Social_engineering_(security)
  • Winkler, I. (2017, Jun 22). 7 elements of a successful security awareness program. Retrieved from CSO Online: https://www.csoonline.com/article/2133408/data-protection/network-security-the-7-elements-of-a-successful-security-awareness-program.html
  • Yuri Diogenes, E. O. (2018). Cybersecurity - Attack and Defense Strategies. 2018: Packt Publishing.