Using Game Theory in Deception Strategy for Cyber Security | Lucideus Research

Introduction:

Deception is an ancient art, used by many animals to escape predators or to trick prey by appearing harmless. It refers to false behaviour that a species uses to create the illusion, for its opponent, that the situation is normal. An example is Uropyia meticulodina, a species of moth whose wings mimic a curled dry leaf, creating the illusion that it is just one leaf in a pile of fallen leaves. The same strategy is used in cyber security: the defender persuades an adversary to believe that the fake information it is given is true. If an adversary is trapped through this mechanism, the defender can learn the attacker’s techniques and tools and gauge the severity of the attacks the defence is exposed to.

In this blog I am going to describe the idea presented in the research article titled “A Game-Theoretic Analysis of Deception over Social Networks Using Fake Avatars” [1], along with my take on it. The type of deception the paper uses to identify malicious activity in different networks is the “fake avatar”. To attract the attacker, the avatar must be lucrative enough, and continuous monitoring is required on the defender’s side.

The authors formulate a deception game by applying the signaling game mechanism around the fake avatar. The underlying model considers a scenario in which the defender has no information about the type of user she is dealing with. The output of the model is the defender’s best decision in this scenario: an estimate of the chance that the system interacting with the avatar is a compromised one. If that probability crosses a threshold, an alert is triggered. The paper shows that deception can help detect, and in a way prevent, attacks at an early stage, creating negative payoffs for the attacker and positive ones for the defender.


How game theory is useful:

Game theory is a tool that studies the interactions between participants and folds them into the overall analysis. Its results account not only for the costs and benefits attached to each participant but also for the outcome of their interaction. It is especially valuable when one party does not know the type of individual it is dealing with, i.e. under incomplete information. In such scenarios, game theory specifies the optimal action to take based on the receiver’s beliefs.

In a signaling game there are two classes of players: the sender and the receiver. The sender sends a signal to the receiver, who analyses it in order to decide which action to take. Consider a scenario with two players: the job seeker (sender) and the employer (receiver). There are two types of job seeker, good and bad, and the employer does not know which type she is dealing with. It is common knowledge, however, that the job market consists of 40 percent good job seekers and 60 percent bad ones. The employer’s action set consists of two actions, {Hire, Don’t Hire}. Since the employer does not know the applicant’s type, she has to devise a method of separating the two types in order to get some idea.
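As a toy illustration of the employer’s problem under the prior alone, the decision reduces to comparing expected payoffs. The payoff numbers below are my own assumptions for the sketch, not taken from the paper:

```python
# Toy expected-payoff calculation for the hiring example.
# Payoff numbers are illustrative assumptions, not from the paper.
p_good = 0.4          # prior probability the applicant is a good type
p_bad = 1 - p_good    # prior probability of a bad type

payoff_hire_good = 10   # employer's gain from hiring a good applicant
payoff_hire_bad = -5    # employer's loss from hiring a bad applicant
payoff_dont_hire = 0    # hiring nobody yields nothing either way

ev_hire = p_good * payoff_hire_good + p_bad * payoff_hire_bad
ev_dont = payoff_dont_hire

best_action = "Hire" if ev_hire > ev_dont else "Don't Hire"
print(ev_hire, best_action)  # 1.0 Hire
```

With these numbers the expected value of hiring (1.0) beats not hiring (0), so the employer would hire on the prior alone; different payoffs can flip the decision, which is exactly why the threshold discussed next matters.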

Two types of equilibrium exist in signaling games: pooling and separating. A pooling equilibrium arises when all senders send the same message irrespective of their type; a separating equilibrium arises when each sender sends a message corresponding to its type. When both types send the same signal, it is difficult to tell them apart. In that case a threshold probability, which depends on the prior belief, is calculated from the payoffs attached to all the scenarios, and the position of the employer’s belief relative to this threshold defines her optimal action (whether or not to hire). A similar situation arises when a user does not know whether the system she is dealing with is compromised.
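The threshold itself can be derived by equating the expected payoffs of the two actions. A minimal sketch with illustrative payoffs (my own assumptions, not the paper’s): the employer hires whenever her belief p that the applicant is good exceeds the indifference point p*:

```python
# Solve for the threshold belief p* at which the employer is indifferent
# between Hire and Don't Hire. Payoff numbers are illustrative assumptions.
gain_good = 10   # payoff from hiring a good applicant
loss_bad = -5    # payoff from hiring a bad applicant
outside = 0      # payoff from not hiring at all

# Indifference condition: p * gain_good + (1 - p) * loss_bad = outside
p_star = (outside - loss_bad) / (gain_good - loss_bad)
print(p_star)  # 0.333... -> hire whenever belief in "good" exceeds 1/3

prior = 0.4
print("Hire" if prior > p_star else "Don't Hire")  # Hire
```

Here the 40 percent prior sits above the 1/3 threshold, so hiring is optimal; a sufficiently costly bad hire would push p* above the prior and reverse the decision.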


Basic structure of the game:

On the defender’s side, the fake avatar takes the decision to raise an alert or not. The fake avatar is deployed for external users; internal users already know the details of the fake avatar. In this model, the sender is an external user, who is of two types, normal user or attacker, and the receiver is the defender deploying the fake avatar. The avatar is not aware of the type of external user it is dealing with. The external user sends either a suspicious signal or a non-suspicious one, and the defender decides whether to raise an alert. The figure above shows the basic structure of the game.

The external user moves first. She sends a signal, which is received by the fake avatar; the avatar analyses the interaction and takes the best decision resulting from that analysis. Costs and gains are attached to every action, and payoffs to every combination of actions. The solution of the game consists of two equilibria, depending on the prior proportion of normal users in the population and the defender’s belief about it.
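A minimal sketch of the avatar’s decision rule, assuming illustrative priors, signal likelihoods, and an alert threshold (none of these numbers come from the paper): the avatar updates its belief by Bayes’ rule on the observed signal and raises an alert when the posterior probability of facing an attacker crosses the threshold.

```python
# Sketch of the fake avatar's decision rule in the deception game.
# All numeric values are illustrative assumptions, not from the paper.

def posterior_attacker(prior_attacker, signal_suspicious):
    """Bayes update of the belief that the external user is an attacker."""
    # Assumed likelihoods of a suspicious signal from each user type.
    p_susp_given_attacker = 0.7
    p_susp_given_normal = 0.1
    if signal_suspicious:
        l_att, l_norm = p_susp_given_attacker, p_susp_given_normal
    else:
        l_att, l_norm = 1 - p_susp_given_attacker, 1 - p_susp_given_normal
    evidence = prior_attacker * l_att + (1 - prior_attacker) * l_norm
    return prior_attacker * l_att / evidence

def avatar_action(prior_attacker, signal_suspicious, threshold=0.5):
    """Raise an alert only when the posterior crosses the threshold."""
    belief = posterior_attacker(prior_attacker, signal_suspicious)
    return "raise alert" if belief >= threshold else "no alert"

print(avatar_action(0.2, signal_suspicious=True))   # raise alert
print(avatar_action(0.2, signal_suspicious=False))  # no alert
```

Even with only a 20 percent prior of facing an attacker, a suspicious signal lifts the posterior to roughly 0.64 and triggers the alert, which captures how the defender’s belief, not the prior alone, drives the equilibrium action.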


Conclusion:
There are traditionally used methods for detecting a potential intrusion, called intrusion detection systems (IDS), but they are not able to attain a high level of accuracy: some malware stays inside the system undetected, and it takes a long time to realise that the system is compromised. The model described above can be embedded inside an IDS to increase its accuracy and to direct a compromised system towards a fake environment or fake data. The technique should be applied to the traffic that the IDS has classified as non-malicious.
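As a rough sketch of that placement (the function names and the threshold below are hypothetical stand-ins, not any real IDS API), only the traffic an IDS passes as non-malicious would be routed through the avatar-based check:

```python
# Hypothetical pipeline: apply the deception model only to traffic
# that the IDS has already classified as non-malicious.

def ids_classify(flow):
    """Stand-in for an IDS verdict; a real IDS would inspect the flow."""
    return flow.get("signature_match", False)  # True -> known-malicious

def avatar_check(flow, threshold=0.5):
    """Stand-in for the signaling-game belief from avatar interactions."""
    return flow.get("attacker_belief", 0.0) >= threshold

def handle(flow):
    if ids_classify(flow):
        return "block"                         # IDS caught it directly
    if avatar_check(flow):
        return "redirect to fake environment"  # deception model flags it
    return "allow"

print(handle({"signature_match": True}))   # block
print(handle({"attacker_belief": 0.8}))    # redirect to fake environment
print(handle({"attacker_belief": 0.1}))    # allow
```

The point of the ordering is that the deception layer acts as a second filter, catching compromises that slip past signature-based detection and diverting them into a monitored fake environment instead of the real one.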



References

1. Mohammadi, A., Manshaei, M.H., Mohebbi Moghaddam, M., Zhu, Q.: A game-theoretic analysis of deception over social networks using fake avatars. In: Decision and Game Theory for Security: 7th International Conference, pp. 382–394 (2016)

2. Shen, S., Li, Y., Xu, H., Cao, Q.: Signaling game based strategy of intrusion detection in wireless
sensor networks. Comput. Math. Appl. 62(6), 2404–2416 (2011)

3. Ahmad, A., Maynard, S.B., Park, S.: Information security strategies: towards an organizational
multi-strategy perspective. J. Intell. Manuf. 25(2), 357–370 (2014)

4. Carroll, T.E., Grosu, D.: A game theoretic investigation of deception in network security. Secur.
Commun. Netw. 4(10), 1162–1172 (2011)

5. Almeshekah, M.H., Spafford, E.H.: Planning and integrating deception into computer security
defenses. In: Proceedings of the 2014 workshop on New Security Paradigms Workshop, pp. 127–138.
ACM (2014)

6. Zarras, A.: The art of false alarms in the game of deception: leveraging fake honeypots for enhanced
security. In: 2014 International Carnahan Conference on Security Technology (ICCST), pp. 1–6. IEEE (2014)

7. Wang, W., Bickford, J., Murynets, I., Subbaraman, R., Forte, A.G., Singaraju, G., et al.:
Detecting targeted attacks by multilayer deception. J. Cyber Secur. Mob. 2(2), 175–199 (2013)

8. Costarella, C., Chung, S., Endicott-Popovsky, B., Dittrich, D.: Hardening honeynets against honeypot-aware botnet attacks. University of Washington (2013)

9. Zhu, Q., Clark, A., Poovendran, R., Basar, T.: Deployment and exploitation of deceptive
honeybots in social networks. In: Conference on Decision and Control. IEEE (2013)

10. Clark, A., Zhu, Q., Poovendran, R., Başar, T.: Deceptive routing in relay networks. In:
Grossklags, J., Walrand, J. (eds.) GameSec 2012. LNCS, vol. 7638, pp. 171–185. Springer,
Heidelberg (2012). doi:10.1007/978-3-642-34266-0_10

11. Zhu, Q., Clark, A., Poovendran, R., Basar, T.: Deceptive routing games. In: IEEE 51st
Conference on Decision and Control (CDC), pp. 2704–2711. IEEE (2012)

12. L’Huillier, G., Weber, R., Figueroa, N.: Online phishing classification using adversarial data
mining and signaling games. In: Proceedings of the ACM SIGKDD Workshop on CyberSecurity
and Intelligence Informatics, pp. 33–42. ACM (2009)

13. Ibrahimi, K., Altman, E., Haddad, M.: Signaling game-based approach to power control management
in wireless networks. In: Proceedings of Performance monitoring and measurement of heterogeneous
wireless and wired networks, pp. 139–144. ACM (2013)

14. Casey, W., Morales, J.A., Nguyen, T., Spring, J., Weaver, R., Wright, E., Metcalf, L., Mishra, B.:
Cyber security via signaling games: toward a science of cyber security. In: Natarajan, R. (ed.) ICDCIT
2014. LNCS, vol. 8337, pp. 34–42. Springer, Heidelberg (2014). doi:10.1007/978-3-319-04483-5_4

15. Rahman, M.A., Manshaei, M.H., Al-Shaer, E.: A game-theoretic approach for deceiving remote
operating system fingerprinting. In: 2013 IEEE Conference on Communications and Network Security
(CNS), pp. 73–81. IEEE (2013)

16. Pawlick, J., Farhang, S., Zhu, Q.: Flip the cloud: cyber-physical signaling games in the presence
of advanced persistent threats. In: Khouzani, M.H.R., Panaousis, E., Theodorakopoulos, G. (eds.)
GameSec 2015. LNCS, vol. 9406, pp. 289–308. Springer, Heidelberg (2015).
doi:10.1007/978-3-319-25594-1_16

17. Mohebbi Moghaddam, M., Manshaei, M.H., Zhu, Q.: To trust or not: a security signaling game
between service provider and client. In: Khouzani, M.H.R., Panaousis, E., Theodorakopoulos, G.
(eds.) GameSec 2015. LNCS, vol. 9406, pp. 322–333. Springer, Heidelberg (2015).
doi:10.1007/978-3-319-25594-1_18

18. Pawlick, J., Zhu, Q.: Deception by design: evidence-based signaling games for network defense.
arXiv preprint arXiv:1503.05458 (2015)

19. Patcha, A., Park, J.M.: A game theoretic formulation for intrusion detection in mobile ad hoc
networks. IJ Netw. Secur. 2(2), 131–137 (2006)

20. Estiri, M., Khademzadeh, A.: A theoretical signaling game model for intrusion detection in
wireless sensor networks. In: 2010 14th International Telecommunications Network Strategy
and Planning Symposium (NETWORKS), pp. 1–6. IEEE (2010)

21. Liu, Y., Comaniciu, C., Man, H.: A Bayesian game approach for intrusion detection in wireless
ad hoc networks. In: Workshop on Game theory for communications and networks. ACM (2006)

22. Lin, J., Liu, P., Jing, J.: Using signaling games to model the multi-step attack defense scenarios
on confidentiality. In: Grossklags, J., Walrand, J. (eds.) GameSec 2012. LNCS, vol. 7638, pp.
118–137. Springer, Heidelberg (2012). doi:10.1007/978-3-642-34266-0_7

23. Shoham, Y., Leyton-Brown, K.: Multiagent Systems: Algorithmic, Game-theoretic, and
Logical Foundations. Cambridge University Press, Cambridge (2008)

24. Gibbons, R.: Game Theory for Applied Economists. Princeton University Press, Princeton (1992)

25. Virvilis, N., Serrano, O.S., Vanautgaerden, B.: Changing the game: the art of deceiving
sophisticated attackers. In: 6th International Conference On Cyber Conflict (CyCon 2014), pp. 87–97. IEEE (2014)