CFF RSAIF: Responsible and Secure AI for Future - Working Group Charter
Introduction:
The Responsible and Secure AI Framework Working Group (the “Working Group”) is established to facilitate collaborative research and educational efforts in the field of responsible and secure artificial intelligence (AI) technologies. This charter outlines the mission, objectives, structure, and guidelines governing participation in the Working Group.
Mission and Objectives:
The mission of the Working Group is to advance the understanding, development, and dissemination of responsible and secure AI practices. Our key objectives are as follows:
1. Promote research and knowledge sharing on responsible AI development, ethics, and security.
2. Provide educational resources and materials for AI professionals, researchers, and the broader community.
3. Foster collaboration and exchange of best practices among participants.
Structure:
The Working Group shall consist of the following components:
1. Steering Committee: Composed of selected leaders in the field of AI ethics and security. Responsible for overall governance, strategy, and decision-making.
2. Research and Education Sub-Working Groups: Specialized groups focusing on specific topics within responsible and secure AI. These sub-working groups will conduct research, create educational content, and organize events.
3. Participants: Individuals and organizations interested in contributing to the mission of the Working Group by engaging in research, educational activities, and collaborative initiatives.
Participant Guidelines:
Participation in the Responsible and Secure AI Framework Working Group is open to individuals, academic institutions, non-profit organizations, and industry representatives who share our commitment to the research and educational nature of the engagement. To ensure a productive and inclusive environment, the following guidelines must be adhered to:
1. Non-Commercial Focus: Participation in the Working Group should be driven by the desire to contribute to research and education rather than sales or marketing purposes. Participants should refrain from using Working Group resources for commercial promotion.
2. Research and Education: Participants are encouraged to actively engage in research projects, knowledge sharing, and educational content creation related to responsible and secure AI. Contributions in the form of whitepapers, articles, presentations, or workshops are highly encouraged.
3. Collaboration and Knowledge Sharing: Participants should actively collaborate with others, share knowledge, and contribute to the collective learning within the Working Group. Open and constructive dialogue is essential.
4. Respect and Inclusivity: All participants should demonstrate respect for diverse perspectives and opinions. Discrimination, harassment, or exclusionary behavior will not be tolerated.
5. Confidentiality: Participants should respect any confidential information shared within the Working Group and adhere to any applicable intellectual property and privacy regulations.
6. Transparency: Participants are encouraged to openly disclose any conflicts of interest that may arise during the course of their participation.
Meetings and Communication:
Regular meetings, both virtual and in-person when possible, will be scheduled to facilitate collaboration and knowledge exchange. Communication will primarily occur through designated channels, and information sharing is encouraged.
Amendments to Charter:
This charter may be amended with the consensus of the Steering Committee, taking into account the input and feedback of the Working Group participants.
Agreement:
By participating in the Responsible and Secure AI Framework Working Group, members agree to abide by the principles and guidelines outlined in this charter. The Working Group is committed to promoting the responsible and secure development of AI technologies through research, education, and collaborative efforts.
Working Group Contact:
For inquiries, suggestions, or additional information, please contact cffprograms@rsaif.ai (ATTN: Valmiki Mukherjee).
Date of Establishment: June 21, 2023
Core Working Group:

Amar Kanagaraj
Protecto

Amy De Salvatore
NightDragon

Andrea Bonime-Blanc
GEC Risk

Anupam Gupta
Architect and Advisor

Aruneesh Salhotra
SNM Consulting

Avani Desai
Schellman

Brian Levine
EY

Danny Manimbo
Schellman

David Guffrey
Medigate by Claroty

Demetrius Comes
Sovrn

Deepak Seth
Texas Christian University

Diana Kelley
Protect AI

Douglas Jensen
Bismarck State College

Eman El-Sheikh
UWF Center for Cybersecurity

Errol Weiss
Health ISAC

Gordon Pelosse
Ex-CompTIA

Hannes Hapke
Digits

Heather Kadavy
Third Party Risk Association

Jason Christman
PulseLogic

Jen Vasquez
Evernorth

Joyce Rancani
CVS Health

Julie Gaiaschi
Third Party Risk Association

Katherine Thompson
Cyber Future Foundation

Kallol Bhattacharya
Contributor

Kay Firth-Butterfield
Good Tech Advisory

Kuljit Bhogal
Osler, Hoskin & Harcourt LLP

Mark Orsi
Global Resilience Federation

Martin Stanley
NIST

Megha Sai Sree Pinaka
UT Dallas

Meena Martin
GSK

Nagaraju Chayapathi
Accenture AI

Nick Shevelyov
vCSO

Pamela Gupta
TrustedAI

Pamela Isom
IssAdvice & Consulting LLC

Paul Stapleton
Dexcom

Protik Mukhopadhyay
Protecto

Rahat Sethi
Adobe

Ram Dantu
University of North Texas

Robert Kolasky
Exiger

Sarah Kuranda
NightDragon

Sriram Puthucode
Balbix

Sounil Yu
Cyber Defense Matrix

Tejas Shroff
UT Dallas

Tom Bendien
GT Cyber Labs

Ty Greenhalgh
Medigate by Claroty

Upendra Mardikar
TIAA

Valmiki Mukherjee
CFF

Varshith Dondapati
Softility

Vishwas Manral
Precize Inc

QA & Review:

Chris Strahorn
Rise Health

Ira Winkler
CYE

Mihaela Ulieru
IMPACT Institute

Olivia Rose
Rose CISO Group

Serge Christiaans
Sopra Steria

Stephen Singh
Zscaler

Vijay Bolina
Google DeepMind

Advisors & Observers:

Frincy Clement
Women in AI

Justin Greis
McKinsey

Malcolm Harkins
HiddenLayer

Nitin Natarajan
CISA

Pramod Gosavi
11.2 Capital
