#205 Building AI Resilience with Cloud Direct
The ISO Show - A podcast by Blackmores UK

AI usage has skyrocketed in the past two years, with many commonplace apps and software packages now featuring some form of AI integration. With the rapid development and possibilities unlocked by this powerful technology, it can be tempting to go full steam ahead with implementing AI in your day-to-day business activities. However, new technologies come with new risks that need to be understood and mitigated before any potential incidents. In this episode Mark Philip, Information Security Manager at Cloud Direct, joins Ian to discuss emerging AI risks and how you can build AI resilience into your existing practices.

You’ll learn:
· Who is Mark?
· Who is Cloud Direct?
· How can you assess your current level of AI resilience?
· What are some of the key threats that AI systems currently face, and how can you mitigate them?
· How can you utilise AI to enhance your security?
· What is best practice when responding to an AI-related security incident?

Resources:
· Cloud Direct
· Isologyhub

In this episode, we talk about:

[02:05] Episode Summary – We invite Cloud Direct’s Information Security Manager, Mark Philip, onto the show to discuss AI risks and how to build AI resilience into your existing security practices.

[03:25] Who is Mark Philip? – While his primary role is as an Information Security Manager at Cloud Direct, a little-known fact about him is that he is an amateur triathlete! In London earlier in 2024, he was lucky enough to bump into Alistair Brownlee, the UK’s two-time Olympic gold medallist in triathlon.

[05:10] Who are Cloud Direct? – Founded in 2003, Cloud Direct are a Microsoft Azure Expert MSP, the highest accreditation any Microsoft partner can hold, putting them in the top 5% of Microsoft partners globally. They offer consultancy and professional managed services, specialising in Microsoft Cloud, all underpinned with security across the whole Microsoft stack. They also assist with digital transformation and modernisation.

[06:30] Assessing the current AI risk landscape – Ian points out that a recent report from the Capgemini Research Institute found that 97% of organisations are using generative AI. With this increase in AI use comes a corresponding increase in AI-related security incidents. Mark adds that the technology is still very new, with larger software companies such as Microsoft pushing AI elements into their tools, so there is a learning curve involved in utilising it. There is also a lack of risk assessment being done in relation to AI; not a lot of thought is going into its day-to-day use. If you’re using an AI platform, you need to ask yourself: what is this platform actually doing with the data I’m inputting? There is also the fact that malicious individuals are already leveraging this technology with the likes of deepfakes, bad bots and more sophisticated phishing schemes – and the harsh truth is that they’re going to get better at it over time.

[08:20] What is AI resilience and why is it so important? – AI resilience is about equipping businesses with the processes that govern the use and deployment of AI, so that they can anticipate and mitigate any AI risks effectively. Similar to ISO Standards, this involves a risk-based approach. However, it will look very different depending on your business and how you are using AI.
For example, the risks of someone using AI to generate a transcript of meeting notes will be much lower in comparison to a healthcare company using complex sets of data with AI to synthesise new medicines. So, if you are using AI you need to consider what the inherent risks could be, which will depend on the data you’re processing (i.e. is it sensitive data?), and then factor in whether the software is publicly available (such as ChatGPT) or a closed model under your control. Asking these types of questions will give you a more realistic outlook on the risk landscape you face.

[10:35] How can a business assess their current level of AI resilience? – AI is here to stay, so you won’t be able to avoid it forever. First, you need to embrace and understand it, and that includes creating a clear picture of your use cases. Mark states they did this exercise internally at Cloud Direct when they were starting to use Microsoft’s Copilot. They asked themselves:
· What sort of data is the software interacting with?
· What data are we putting into it?
· How do Microsoft manage the program and related security?
· Are Microsoft storing any of that data?
It’s not just about the security either; you need to understand why you’re using AI and whether it will actually be to your benefit. A lot of people are using it because it’s new and shiny, but if it’s not actively helping you achieve your business goals, then it’s more of a distraction than anything else. For those looking for additional guidance on AI policies, risks and resilience, there’s a lot of guidance provided by both ISO and the NCSC. ISO 42001 in particular is useful for both people using AI and developers creating AI. If you’re stuck on where to start, a Gap Analysis is a fantastic tool to see where you are currently, what gaps you need to bridge in your security to cover any AI usage, and how well you are complying with current legal requirements (the EU AI Act is now in effect!). Another tool is a Risk Assessment. You may not process what many would consider sensitive data, such as healthcare information, but even if you store and hold customer data, you need to ensure that any AI you use doesn’t pose a risk to it.

[14:30] How can AI improve security and resilience? – Sticking with Microsoft as an example, as they are releasing a lot of AI-driven tools, these can be used to fill gaps that humans may not have the time to cover. One example of this is monitoring and responding to security alerts: previously a system may have just sent an alert to a member of staff to resolve, but now AI security tools can act on those alerts on your behalf. So, if you have limited IT resources, this could be a fantastic addition to your security set-up. It also eliminates the lag of human response, and AI can look at things in a way a human wouldn’t think to.

[17:55] How do people stay ahead of the curve in the evolving AI landscape? – You should be using the myriad of resources available to learn about AI, as there are webinars, social media feeds, blogs and videos released constantly. Microsoft in particular are offering a comprehensive feed of information relating to AI, its risks and new technologies in development. The key is to understand AI before integrating it into your business. Don’t just jump at the new shiny toys being advertised to you; go to reputable sources such as the ICO, NCSC, Cyber Essentials and regulatory bodies to learn about the technology and the benefits it can bring, in addition to the risks you need to mitigate against.
Mark can vouch for Microsoft’s thought leadership in this field, as they keep all of their customers up-to-date with their AI-related developments. Cloud Direct themselves are also putting out some great content, so don’t forget to check out their resources. If you are already utilising Microsoft’s tools, then Cloud Direct can help explain how these new tools can apply to your business. If you’re looking for assistance with ISO 42001, then Blackmores can help you with implementing a robust AI Management System.

[21:40] What is best practice when responding to an AI-related incident? – To be honest, there’s no reason not to treat it like any other security incident. We’ve already adapted to more sophisticated security risks as a result of the move towards home and hybrid working during the pandemic; this is simply another stage in an ever-changing security landscape. Treat it like assessing any new step: you likely already have the processes in place for analysing risk, so simply apply them to the usage of AI and put in place the necessary governance based on your findings. Standards such as ISO 20000 (IT Service Management) and ISO 22301 (Business Continuity) are fantastic tools if you’re new to this sort of incident response planning. If you’ve already been certified to these standards, then you likely have the following in place already:
· Risk Assessments
· Business Impact Assessments
· Business Continuity Plans
· Recovery Plans
Simply add AI as an additional risk factor to your existing management system and update the necessary documentation to include actions and considerations for its use. If you update your Business Continuity and Recovery Plans, then make sure to test them! Don’t just assume that they will work; put them to the test and adjust until you’re comfortable that in a real incident everyone in the business knows how to react, what to communicate and how to get back up and running.

[24:00] What are Mark’s predictions for the field of AI resilience? – People need to look at the opportunities in utilising AI; a lot of people are using it without really understanding it, so there’s a lot of learning still to do. He expects to see a lot of businesses fully grasping how they can use AI to their advantage in the coming years. With that comes the challenge of ensuring it’s integrated safely, with the right governance embedded to ensure its safe and ethical usage across entire organisations. Another big challenge is handling data privacy within AI. Scams are only going to get more complex as AI develops, and you need to ensure your business can protect against that as much as possible. Businesses should also carefully consider which AI platforms they choose to use: ensure you understand what data is being input and stored, and the level of control you have over it. All of this to say, there are a lot of massive benefits to using AI and you shouldn’t shy away from it. But you need to ensure you are using it safely and ethically.

[27:30] What is Mark’s book recommendation? – The Hunt for Red October by Tom Clancy

[28:45] What is Mark’s favourite quote? – “I have a bad feeling about this…” – Star Wars

Want to learn more about Cloud Direct? Check out their website.

We’d love to hear your views and comments about the ISO Show, here’s how:
● Share the ISO Show on Twitter or LinkedIn
● Leave an honest review on iTunes or Soundcloud. Your ratings and reviews really help and we read each one.
Subscribe to keep up-to-date with our latest episodes: Stitcher | Spotify | YouTube | iTunes | Soundcloud | Mailing List