By Bryan Bird

AI and Cybersecurity: Why an algorithm won’t defend your data

Security, in an ever more digital world, has become a central concern for everyone. Near-daily reports of data breaches, ranging from embarrassing material leaked from a social media account all the way up to societal upheaval, have left an air of paranoia hanging over IT experts and average users alike across the globe.


This looming threat has been brought home for organisations across the world, both literally and figuratively, in the wake of the COVID-19 pandemic. The magnification of security risks caused by such a breakneck transition to remote working has created the perfect environment for cyber threats to flourish.


Recent attacks, such as the Colonial Pipeline cyber-attack in the US this summer, have highlighted the fragility of the systems that we rely on. In the aftermath of these attacks, cybersecurity and all its associated tools have emerged as essential for all. But how can this field help you as an individual protect your data?


What even is “Cybersecurity”?


‘Cybersecurity’ can be an incredibly ambiguous term for those not working closely with the industry. The term conjures up images of Matrix-style, green text flashing screens, with a furious amount of typing and jargon. Even for those in IT who do not work directly in security, cybersecurity can be seen as an annoying afterthought of the development process. What the majority of people agree on is that cybersecurity is a complicated field that requires broad technical knowledge to understand its problems and implement its solutions.


A technical definition of cybersecurity would be something like the following: "Cybersecurity is the organization and collection of resources, processes, and structures used to protect cyberspace and cyberspace-enabled systems from occurrences that misalign de jure from de facto property rights." (Craigen, Diakun-Thibault, & Purse, 2014). While this does an excellent job of encapsulating the field and its aims, it has just enough jargon to confuse anyone. So how can we break this concept down into plain language?


A simpler, more practical understanding of cybersecurity is the process of attacking or defending computer networks. Predominantly, cybersecurity relates to restricting access to networked devices, while hacking is unauthorised access to these systems and their accompanying data.


In industry, the categories of actors are broadly broken down into the Blue (defenders) and Red (attackers) teams. The Blue team seeks to harden systems against attack and protect communications. It does so by utilising network monitoring tools, encrypting data and imposing strict rules on account access. The Red team looks to penetrate these defences by listening to network traffic to find information, gaining unauthorised credentials to access these systems, or exploiting bugs in software that allow them to access data or limit services.
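
To make the Blue team's day-to-day work a little more concrete, the sketch below shows the kind of account monitoring a defender might automate: flagging source addresses that rack up repeated failed logins in a short window. It is only an illustration; the log format, threshold and time window are assumptions, not any particular product's behaviour.

```python
# A minimal sketch of blue-team account monitoring: flag source IPs with
# repeated failed logins inside a short window. The log format, threshold
# and window are illustrative assumptions.
import re
from collections import defaultdict
from datetime import datetime, timedelta

FAILED_LOGIN = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) FAILED LOGIN user=(?P<user>\S+) ip=(?P<ip>\S+)"
)
THRESHOLD = 5                   # failed attempts before raising an alert
WINDOW = timedelta(minutes=10)  # look-back window per source IP


def suspicious_ips(log_lines):
    """Yield IPs that reach THRESHOLD failed logins within WINDOW."""
    failures = defaultdict(list)
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if not match:
            continue
        ts = datetime.fromisoformat(match["ts"])
        ip = match["ip"]
        # keep only the failures that fall inside the window, then count
        failures[ip] = [t for t in failures[ip] if ts - t <= WINDOW] + [ts]
        if len(failures[ip]) >= THRESHOLD:
            yield ip


if __name__ == "__main__":
    sample = [
        f"2021-06-01 10:0{i}:00 FAILED LOGIN user=admin ip=203.0.113.7"
        for i in range(6)
    ]
    print(set(suspicious_ips(sample)))  # {'203.0.113.7'}
```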

The complexity of the field comes from the arms race between defenders, who continually restrict the points of access to these systems, and attackers, who seek unauthorised access by exploiting vulnerabilities.


More recently, new elements have entered the picture: zero-day attacks, which exploit bugs that a software's creator has known about for zero days (and so has had no time to fix), and the leveraging of IoT devices, such as smart home gadgets or any other piece of technology with an internet connection, to carry out attacks. These changes have led to an increasingly dynamic environment, with constantly shifting threat vectors.


AI’s place in Cybersecurity


As with a significant number of technical and societal problems, many have pointed to artificial intelligence (AI) and its implementation in cybersecurity as the saving grace we need. Recent reports have found that 61% of experts acknowledge that AI is critical to threat detection (Tolido et al., 2019).


The automation of system monitoring and real-time malware and virus analysis, to name but a few areas, have all benefitted greatly from AI and machine learning in threat detection. These benefits can be clearly seen in the effectiveness of even simple consumer-grade antivirus, such as Windows Defender, which has improved greatly at threat detection in recent years by leveraging machine learning (Collier, 2021).


Furthermore, businesses are increasingly leaning on AI for threat prediction to limit and reduce attack vectors. A key area where AI can play a role in predicting threats is zero-day attacks. As mentioned previously, these attacks exploit bugs in programs or software patches that are currently unknown to the creators or users of the software. These vulnerabilities are researched and sold by zero-day brokers to the highest bidders, often state actors, who look to utilise them in pursuit of their own goals.


Better AI systems could have a two-fold benefit in preventing these attacks. Firstly, they could be used to automate testing and apply dynamic analysis to detect more sophisticated vulnerabilities in systems and networks when conducting penetration testing. Secondly, behaviour-based detection could model the interactions that malware or vulnerabilities may have with a system. Rather than actively scanning all incoming files or traffic to identify potential threats, behaviour-based detection leverages machine learning to predict the effect that certain malware may have on a system, allowing it to become more effective over time at predicting potential vulnerabilities (Galal, Mahdy & Atiea, 2016).
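
As a rough illustration of the behaviour-based idea, the sketch below trains a classifier on features extracted from observed program behaviour (counts of a few action types per run) rather than on file signatures. The feature set and the tiny training data are invented purely for illustration, and a real system would use far richer behavioural traces; this is a sketch of the concept, not of Galal et al.'s actual model.

```python
# A toy sketch of behaviour-based detection: classify a program by what it
# was observed doing, not by its file contents. Features and data below are
# invented for illustration only. Requires scikit-learn.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical behavioural features per observed run:
# [files_written, registry_edits, network_connections, processes_spawned]
X_train = [
    [2, 0, 1, 1],     # benign: a normal document-editor session
    [1, 1, 0, 0],     # benign
    [40, 12, 9, 6],   # malicious: mass file writes, persistence, beaconing
    [25, 8, 14, 4],   # malicious
]
y_train = ["benign", "benign", "malicious", "malicious"]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# Score a newly observed behaviour trace instead of scanning the file itself.
new_trace = [[33, 10, 11, 5]]
print(model.predict(new_trace))  # expected: ['malicious'] on this toy data
```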


Viewing AI through this lens, it is easy to see why organisations are scrambling to implement these solutions; so much so that global cybersecurity spending is expected to exceed $1.75 trillion cumulatively between 2021 and 2025 (Braue, 2021). This spending and the integration of AI into cybersecurity will have a beneficial impact on defence: AI and machine learning can be among the most valuable tools for a cybersecurity specialist or IT administrator. However, this seemingly invulnerable web of protection that AI and machine learning can spin has a loose thread: you.


Good practice vs. good coding


In the wake of such technical terminology, how can you as an individual be seen as such a threat, and what can you be expected to do? Quite simply, following good practices can go a long way to prevent many of the attacks that any system may encounter.

The previously mentioned attack on Colonial Pipeline in the US this summer is a prime example. The breach was caused by hackers gaining access to a former employee's account that had not been deactivated after they left the organisation. Access was gained through the simple use of the account's login details, as the password the individual had used was shared with another account that had previously been compromised (Kelly & Resnick-Ault, 2021). The attack resulted in the payment of a $4.4 million ransom, the loss of 100 GB of sensitive data and economic panic across the US due to the potential for fuel shortages. All because of a single password.
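
One practical way to catch exactly this kind of reused, already-breached password is to check candidate passwords against the public Have I Been Pwned "Pwned Passwords" range API, which uses k-anonymity so that only the first five characters of the password's SHA-1 hash ever leave your machine. The sketch below assumes that public endpoint and the third-party requests library; it is an illustration of the practice, not of how Colonial Pipeline's systems worked.

```python
# A minimal sketch of checking whether a password already appears in known
# breach corpora, via the Have I Been Pwned "range" API (k-anonymity: only
# the first five hex characters of the SHA-1 hash are sent).
# Assumes the public endpoint https://api.pwnedpasswords.com/range/ and the
# third-party `requests` library.
import hashlib
import requests


def times_breached(password: str) -> int:
    """Return how many times this password appears in known breaches."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # Response is lines of "HASH_SUFFIX:COUNT"; look for our suffix.
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0


if __name__ == "__main__":
    print(times_breached("password123"))  # a very large number: never reuse it
```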


The user at the keyboard poses one of the greatest security threats to any organisation and its data. User error is a major source of security failings, ranging from mistakes in coding practices by individual developers, to an administrator misconfiguring a network or granting incorrect privileges to a user account, to an end-user sending their account details in response to a phishing email.


The OWASP Top Ten from the Open Web Application Security Project, a listing of the most critical web application security risks, further acknowledges the threat presented by the human factor (OWASP, 2021). Broken access control, often relating to the unauthorised elevation of user privileges to gain access to systems and their data, takes the top ranking. Furthermore, an entirely new category titled Insecure Design was added in 4th place, highlighting the significant number of security risks that arise from poor human design of applications.
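
To make "broken access control" concrete, the toy sketch below shows the classic mistake of trusting whatever record ID the client asked for, alongside the fix of checking ownership on the server before returning anything. The data and names are invented; the point is the missing authorisation check, not any particular framework.

```python
# A toy sketch of broken access control (OWASP A01:2021) and its fix.
# The "database" and users are invented for illustration.
INVOICES = {
    101: {"owner": "alice", "amount": 250},
    102: {"owner": "bob",   "amount": 980},
}


def get_invoice_broken(invoice_id: int, requesting_user: str) -> dict:
    # Broken: trusts whatever ID the client asked for. Bob can read
    # Alice's invoice just by changing the number in his request.
    return INVOICES[invoice_id]


def get_invoice_fixed(invoice_id: int, requesting_user: str) -> dict:
    # Fixed: authorisation is enforced server-side, per object.
    invoice = INVOICES.get(invoice_id)
    if invoice is None or invoice["owner"] != requesting_user:
        raise PermissionError("not authorised to view this invoice")
    return invoice


if __name__ == "__main__":
    print(get_invoice_broken(101, "bob"))   # leaks Alice's data
    try:
        get_invoice_fixed(101, "bob")
    except PermissionError as err:
        print(err)                          # denied, as it should be
```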


The utilisation of AI in cybersecurity has significant benefits, but as can be seen from these examples, those benefits need to be taken with a grain of salt. As long as humans are involved in the design and use of systems there will be no foolproof defence against attack, no matter how sophisticated the algorithm employed.


To better prevent and recover from cyber-attacks, security must become part of the everyday use of technology. Developers need to move security to the beginning of the development life-cycle and implement secure practices as a priority. Administrators should constantly patch systems and update services to prevent vulnerabilities, build redundancy into networks, and ensure that the correct privileges are given to users. End-users, such as you, need to educate themselves on best practices when interacting with these systems and networks (and stop clicking on those pesky phishing emails).
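
As a final, deliberately mundane example of good administration, the sketch below is the sort of routine audit an administrator might script: flag active accounts with no recent login (candidates for deactivation, much like the dormant account behind the Colonial Pipeline breach) and dormant accounts still holding admin rights. The record format and the 90-day threshold are illustrative assumptions.

```python
# A minimal sketch of a routine account audit: flag stale accounts and
# dormant admin accounts. The record format and the 90-day threshold are
# assumptions for the sake of the example.
from datetime import date, timedelta

STALE_AFTER = timedelta(days=90)
TODAY = date(2021, 11, 1)  # fixed date so the example is reproducible

accounts = [
    {"user": "a.smith", "last_login": date(2021, 10, 28), "is_admin": False, "active": True},
    {"user": "j.doe",   "last_login": date(2021, 3, 14),  "is_admin": True,  "active": True},
    {"user": "old.vpn", "last_login": date(2020, 12, 1),  "is_admin": False, "active": True},
]


def audit(accounts):
    for acct in accounts:
        if not acct["active"]:
            continue  # already deactivated, nothing to flag
        stale = TODAY - acct["last_login"] > STALE_AFTER
        if stale:
            print(f"DEACTIVATE? {acct['user']}: no login since {acct['last_login']}")
        if stale and acct["is_admin"]:
            print(f"REVIEW PRIVILEGES: dormant admin account {acct['user']}")


audit(accounts)
```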


As boring as it might sound, good administration trumps good artificial intelligence in cybersecurity.






 

References:


Braue, D. (2021). Global Cybersecurity Spending To Exceed $1.75 Trillion From 2021-2025. Cybercrime Magazine. Retrieved from: https://cybersecurityventures.com/cybersecurity-spending-2021-2025/


Collier, N. (2021). Can You Rely on Windows Defender in 2021? Retrieved from: https://bestantiviruspro.org/blog/is-windows-defender-good/


Craigen, D., Diakun-Thibault, N., & Purse, R. (2014). Defining Cybersecurity. Technology Innovation Management Review, 4(10), 13-21. http://doi.org/10.22215/timreview/835


Galal, H. S., Mahdy, Y. B., & Atiea, M. A. (2016). Behavior-based features model for malware detection. Journal of Computer Virology and Hacking Techniques, 12(2), 59-67.


Kelly, S., & Resnick-Ault, J. (2021). One password allowed hackers to disrupt Colonial Pipeline, CEO tells senators. Reuters. Retrieved from: https://www.reuters.com/business/colonial-pipeline-ceo-tells-senate-cyber-defenses-were-compromised-ahead-hack-2021-06-08/


OWASP (2021). OWASP Top Ten. Retrieved from: https://owasp.org/www-project-top-ten/


Tolido, R., Thieullent, A.L., Van der Linden, G., Frank, A., Delabarre, L., Buvat, J., Theisler, J., Cherian, S., & Khemka, Y. (2019). Reinventing Cybersecurity with Artificial Intelligence: The new frontier in digital security. Capgemini Research Institute. Retrieved from: https://www.capgemini.com/gb-en/research/reinventing-cybersecurity-with-artificial-intelligence/



