As a member of the First Five Consortium, we develop Artificial Intelligence (AI) enabled analytics tools and platforms that help mitigate the impacts of natural disasters. Our fraud analytics tool can detect and prevent fraudulent claims made to FEMA's Public Assistance (PA) program. Find out how our Responsible AI framework reinforces the principles of ethical and responsible AI.
What is Responsible AI?
Our AI governance group reinforces the principles of ethical and responsible AI by ensuring that all machine learning and AI models developed by our project groups are comprehensive, explainable, ethical, and efficient:
Comprehensive: The AI model has clearly defined testing and governance criteria.
Explainable: The purpose, rationale, and decision-making process of the AI model can be understood by the average end user.
Ethical: The AI initiative has processes in place to seek out and eliminate bias in ML models.
Efficient: The AI model can run continually and respond quickly to changes in the operational environment.
Pre-design stage
At the pre-design stage, our AI governance group evaluates each project by scrutinizing its problem documentation, which includes (a sketch of one possible format follows the list):
- Business context of the research problem undertaken
- Business justification for the algorithm to be developed
- Model parameters used for tuning the model to maximize its performance without overfitting or creating high variance
- Feature choices (inputs) and output definitions
- Any customizations made to a reused algorithm
- Instructions for reproducing the model
- Examples and datasets used for training the algorithm
- Examples for making predictions from the algorithm
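The items above can be captured in a structured record. Below is a minimal sketch in Python; the field names and sample values are illustrative assumptions, not a schema prescribed by our governance group.

```python
from dataclasses import dataclass

@dataclass
class ProblemDocumentation:
    """Illustrative pre-design documentation record (field names are assumptions)."""
    business_context: str        # business context of the research problem
    business_justification: str  # why the algorithm should be developed
    tuning_parameters: dict      # parameters for tuning without overfitting or high variance
    features: list               # feature choices (inputs)
    outputs: list                # output definitions
    customizations: str          # changes made to a reused algorithm, if any
    reproduction_steps: list     # instructions for reproducing the model
    training_data: str           # examples/datasets used for training
    prediction_examples: str     # examples for making predictions

# Hypothetical record for a fraud analytics project.
doc = ProblemDocumentation(
    business_context="Detect fraudulent FEMA Public Assistance claims",
    business_justification="Reduce improper payments after natural disasters",
    tuning_parameters={"max_depth": 6, "learning_rate": 0.1},
    features=["claim_amount", "damage_category"],
    outputs=["fraud_probability"],
    customizations="None; algorithm reused as published",
    reproduction_steps=["pin random seed", "train on data snapshot v1.2"],
    training_data="claims_2019_2020.parquet",
    prediction_examples="score a held-out claim and inspect fraud_probability",
)
```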
After successful evaluation, projects are developed within the following Responsible AI guidelines:
- Shared code repositories: Shared code repositories facilitate efficiency by eliminating rework and reducing the processing overheads of the compute platform. Our developers reuse existing models/algorithms as stepping stones to further their research on solving newer problems.
- Approved model architectures: New model architectures must be approved by our AI governance group, which evaluates them for explainability and interpretability. This is an important step in eliminating issues related to fairness, bias, transparency, and accountability.
- Sanctioned variables: Datasets made available for research should not contain any personally identifiable information (PII), directly or indirectly. Each dataset should be tagged with summary statistics indicating the distribution of its values, to help eliminate bias.
- Established bias testing methodologies to uphold fairness, civil rights, and gender equity in the models created for AI systems (a sketch of one such check follows this list).
- Stability standards for active machine learning models to make sure AI programming works as intended and does not cause memory leaks or performance bottlenecks on the platform.
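As an illustration of the bias testing mentioned above, here is a minimal sketch of a demographic parity check in Python; the column names, sample data, and tolerance threshold are assumptions made for the example, not our established methodology.

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Difference between the highest and lowest positive-prediction
    rates across groups; 0.0 means all groups are treated equally."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical predictions tagged with a protected attribute.
preds = pd.DataFrame({
    "group":     ["A", "A", "B", "B", "B"],
    "predicted": [1, 0, 1, 0, 1],
})

gap = demographic_parity_gap(preds, "group", "predicted")
assert gap <= 0.25, f"Demographic parity gap {gap:.2f} exceeds tolerance"
```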
Implementing Responsible AI
The most important catalysts for solid governance when implementing Responsible AI are model validation and reproducibility. Model validation is the process of ensuring that the AI model is performant, statistically sound, delivers statistically significant benefits, and meets the definition of “success” put forward by the AI project.
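A minimal sketch of what such a validation gate might look like; the metric names and thresholds below are assumptions standing in for a project's agreed definition of success.

```python
# Hypothetical success criteria agreed at project kickoff.
SUCCESS_CRITERIA = {"accuracy": 0.90, "recall": 0.80}

def passes_validation(metrics: dict, criteria: dict = SUCCESS_CRITERIA) -> bool:
    """Return True only if every agreed-upon metric meets its threshold."""
    return all(metrics.get(name, 0.0) >= threshold
               for name, threshold in criteria.items())

run_metrics = {"accuracy": 0.93, "recall": 0.84}
print(passes_validation(run_metrics))  # True: the run meets the definition of success
```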
Our developers group their models by project. Each attempt to train a model for that project is called a “run,” with all the runs for that project rolled up into an “experiment.” Putting forth a simple metadata framework centered on the concept of an experiment yields increased visibility and auditability for any AI project. The metadata necessary to reproduce an experiment, or a run within one, includes (a logging sketch follows the list):
- Type of algorithm used for the development of the model
- Features and transformations used in the model
- Data snapshot or identifier of the dataset used
- Model tuning parameters
- Model performance metrics
- Verifiable code location from source control management
- Environment setup used for model training
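A minimal sketch of logging this metadata for a run; the source does not name a tracking tool, so MLflow is used here purely as an illustrative choice, and all tag and parameter values are made up.

```python
import mlflow

mlflow.set_experiment("fraud-detection")  # one experiment per project

with mlflow.start_run():  # each training attempt is a "run"
    mlflow.set_tag("algorithm", "gradient_boosting")               # type of algorithm
    mlflow.set_tag("features", "claim_amount,damage_category")     # features and transformations
    mlflow.set_tag("data_snapshot", "claims_v1.2")                 # dataset identifier
    mlflow.set_tag("code_location", "fraud-analytics @ commit abc123")  # verifiable code location
    mlflow.set_tag("training_env", "python3.9, scikit-learn 1.0")  # environment setup
    mlflow.log_param("max_depth", 6)                               # tuning parameters
    mlflow.log_param("learning_rate", 0.1)
    mlflow.log_metric("accuracy", 0.93)                            # performance metrics
```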
To test the validity of our models, we verify the following behaviors (a significance-test sketch follows the list):
- It achieves acceptable statistical performance for a sensible offline metric (e.g., accuracy)
- It achieves a statistically significant improvement when compared to a control on some online metric or key performance indicator (KPI) (e.g., clicks, conversions, purchases)
- It is statistically sound, there is no data leakage, and the supervised ML problem was framed correctly.
- Its performance can be explained in terms of the available features.
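As an illustration of the significance check in the second item, here is a minimal sketch of a two-proportion z-test comparing a model's online conversions against a control; the counts are fabricated for the example.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical online results: conversions out of visitors, model vs. control.
conversions = [560, 480]
visitors = [10_000, 10_000]

stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
if p_value < 0.05:
    print(f"Improvement over control is statistically significant (p = {p_value:.3f})")
else:
    print(f"No statistically significant difference detected (p = {p_value:.3f})")
```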
Conclusion
We at Niyam IT believe that governance should not be a blocker to innovation. A balance must be struck between governance and enablement: AI initiatives need autonomy for exploration and experimentation, and in return they should be developed within the Responsible AI framework.
-Shweta Katre
Copyright ©2021, Niyam IT Inc – All rights reserved