The Microsoft Security Insights Show Episode 249 - Femke Cornelissen


An apple a day keeps the hackers away.

Hey! Hey! Hey! MSI Pod-Show Family

We are switching up the live show time today to broadcast at 10:00am EST. Today our awesome guest is Femke Cornelissen. Femke founded Dutch Women in Tech, an initiative that empowers women to pursue careers in IT, and co-organizes the Women in Cyber program, promoting diversity in cybersecurity. Through her work with Experts Live Netherlands and global tech events, she helps create opportunities for professionals to connect and thrive.

Tech Links:

Show Notes - Femke Cornelissen

a. https://www.linkedin.com/in/femcornelissen/

b. https://linktr.ee/Femcornelissen

c. https://teamcopilot.nl/team-copilot/

d. https://femkecornelissen.com/

Slowing down AI in your enterprise:

If you're a Microsoft Defender stack customer struggling to handle ungoverned AI tools like DeepSeek or ChatGPT, here are some things you can do about it using various technologies across the Microsoft security stack:

1) Hunt using the following KQL query (https://lnkd.in/exHTT6ks) and decide what is sanctioned from any hits you find. Afterwards, upload the bulk IOC list to MDE (https://lnkd.in/ekS4JZsG), removing any lines in the CSV for tools you sanction across the org. [Ensure Network protection and Custom indicators are on, and SmartScreen is enforced.]
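As a rough sketch of the CSV-pruning step above, the snippet below filters sanctioned tools out of a bulk indicator CSV before uploading it to MDE. The column name `IndicatorValue`, the sanctioned set, and the sample rows are all assumptions for illustration; match them to the headers in the actual template you download.

```python
import csv
import io

# Hypothetical example: domains your org has sanctioned and wants removed
# from the bulk IOC CSV before importing it into MDE.
SANCTIONED = {"chat.openai.com", "chatgpt.com"}

def filter_ioc_csv(csv_text, sanctioned, value_column="IndicatorValue"):
    """Return CSV text with rows for sanctioned indicator values removed.

    `value_column` is an assumption about the bulk-import template's
    indicator column; adjust it to match the header in your CSV.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if row.get(value_column, "").strip().lower() not in sanctioned:
            writer.writerow(row)
    return out.getvalue()

# Illustrative two-row sample, not a real indicator export.
sample = (
    "IndicatorValue,Action\n"
    "chatgpt.com,Block\n"
    "deepseek.com,Block\n"
)
print(filter_ioc_csv(sample, SANCTIONED))
```

Anything left in the output is what gets blocked org-wide, so eyeball the filtered file before uploading.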

2) Use Defender for Cloud Apps (MDA) app discovery to unsanction new Gen AI apps (https://lnkd.in/eShZsb54). If you're an E5 customer, you can also enable the setting that enforces MDA unsanctions back to MDE, automatically blocking new GenAI apps as they are discovered (https://lnkd.in/e5BK_ME6). Blocked by default until allowed should be the norm with AI tools, IMO.

3) Use Endpoint DLP to block copy/paste of Sensitivity Labels/Sensitive Info Types (SITs) into AI tools (check out the video at: https://lnkd.in/emE2zwVq). Also in Purview, check out the DSPM for AI recommendation and deploy the "Fortify Your Data Security: Data security for AI" policy, which can block elevated Insider Risk users from pasting or uploading sensitive info on AI sites. You may want to edit this policy after it has been deployed to tailor it to your organization (the video demonstrates just this, but the policy uses an older name; we all love a good name change). Notably, it deploys in "block with override" mode. [Also note Insider Risk is another prerequisite; I would check out Ewelina Paczkowska's guide on Insider Risk here: https://lnkd.in/eWSF2kRJ]

The MDA session proxy can also block copy/paste (https://lnkd.in/e9EcX4yZ) if you need protection on devices not onboarded onto Purview/MDE.

4) Global Secure Access has a web content filtering policy with an Artificial Intelligence category under Liability (though, annoyingly, MDE web content filtering does not have this category). A good blog by Kenneth van Surksum comparing web content filtering in MDE and GSA can be found here: https://lnkd.in/euNYjDpP

5) Enabling "Block other LLM chatbots" in Microsoft Edge for Business (i.e., cloud-based Edge management) will add a blocklist for some LLMs under the "URLBlocklist" policy; however, this control is quite lackluster and only contains 11 URLs. It's also more likely you manage Edge at a platform level. For more on Edge for Business, see: https://lnkd.in/eCrYhMaA
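If you want to go beyond those 11 URLs, one option is to maintain your own list and push the merged result through the same URLBlocklist policy via Intune or GPO. A minimal sketch (the "built-in" entries shown are illustrative placeholders, not Edge's actual list):

```python
import json

# Illustrative subset standing in for Edge's built-in LLM blocklist;
# the real list is managed by Microsoft and not reproduced here.
edge_builtin = ["chatgpt.com", "gemini.google.com"]

# Hypothetical custom additions for your org.
custom = ["deepseek.com", "chat.mistral.ai", "gemini.google.com"]

def build_urlblocklist(*lists):
    """Merge, dedupe, and sort URL patterns for a URLBlocklist policy value.

    Emitting a JSON array matches how list-type browser policies are
    commonly supplied as a single string in custom policy settings.
    """
    merged = sorted({u.strip().lower() for lst in lists for u in lst})
    return json.dumps(merged)

print(build_urlblocklist(edge_builtin, custom))
```

Keeping the list in source control and regenerating the policy value makes additions reviewable, rather than hand-editing a policy blob.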

Additionally, consider blocking browser extensions, Office add-ins, Teams apps, etc., as these can also be a source of AI tool leakage. Blocking the .ai TLD in the Intune firewall is another option; however, legitimate businesses may use this TLD.
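A hedged sketch of that .ai trade-off: block everything under the .ai TLD unless the host (or one of its parent domains) is explicitly sanctioned. The `example.ai` allowlist entry is a placeholder, not a recommendation:

```python
from urllib.parse import urlsplit

# Hypothetical allowlist of legitimate .ai businesses your org sanctions.
ALLOWED_AI_HOSTS = {"example.ai"}

def should_block(url, allowed=ALLOWED_AI_HOSTS):
    """Block any host under the .ai TLD unless it, or a parent domain,
    is allowlisted; everything outside .ai passes through untouched."""
    host = (urlsplit(url).hostname or "").lower().rstrip(".")
    if not (host == "ai" or host.endswith(".ai")):
        return False  # not a .ai host, out of scope for this rule
    # Walk parent domains so subdomains of an allowed host pass too.
    parts = host.split(".")
    for i in range(len(parts) - 1):
        if ".".join(parts[i:]) in allowed:
            return False
    return True

print(should_block("https://chat.some-llm.ai/"))     # True: blocked
print(should_block("https://app.example.ai/login"))  # False: sanctioned parent
print(should_block("https://example.com/"))          # False: not .ai
```

A coarse firewall rule can't make this parent-domain exception, which is exactly why a blanket .ai block tends to need per-business carve-outs.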

(Arguably, another option is purchasing and deploying Copilot just to reduce users' need to reach for another AI tool; it might actually make sense vs. the cost of a data leak...)
