FS-ISAC (Financial Services Information Sharing and Analysis Center), a consortium committed to mitigating cyber risk in the global financial system, has released six white papers aimed at enhancing understanding of artificial intelligence (AI) within financial services institutions.
“While AI promises breakthroughs in the financial services industry, there are a plethora of risk factors that the sector needs to be aware of, both when integrating AI into internal processes as well as building cyber defenses against threat actors utilizing AI tools,” said Michael Silverman, vice president of Strategy and Innovation, FS-ISAC. “It is integral to operational safety and the very foundation of trust in the financial services industry that the sector aligns on how to counteract the risks that AI poses.”
The six white papers offer comprehensive insights into AI-related threats and risks, providing adaptable frameworks and strategies tailored to the diverse needs and risk appetites of financial institutions:
1. Adversarial AI Frameworks: Taxonomy, Threat Landscape, and Control Frameworks: Identifies and categorizes threats linked with AI, detailing potential attacks and vulnerabilities while proposing security controls to mitigate these risks.
2. Building AI into Cyber Defenses: Explores the strategic integration of AI in cybersecurity and risk management within financial services, highlighting key considerations and use cases.
3. Combating Threats and Reducing Risks Posed by AI: Outlines mitigation strategies to address both external and internal cyber threats and risks associated with AI.
4. Responsible AI Principles: Examines the principles and practices essential for the ethical development and deployment of AI in line with industry standards.
5. Generative AI Vendor Evaluation and Qualitative Risk Assessment: Offers a customizable tool to aid financial organizations in evaluating and selecting generative AI vendors while managing associated risks effectively.
6. Framework of Acceptable Use Policy for External Generative AI: Provides guidelines for developing acceptable use policies when incorporating external generative AI into security programs.
These papers represent a pioneering effort within the industry, offering customized standards and guidance for navigating the complexities of AI implementation. They draw upon the insights of government agencies, standards bodies, academic researchers, and financial partners such as the Financial Services Sector Coordinating Council (FSSCC), the Bank Policy Institute (BPI/BITS), and the National Institute of Standards and Technology (NIST).