Responsible AI Development and Deployment Policy

Last Updated: December 16, 2024

1. Purpose and Scope
The National Research Institute for Democratized Learning (NRIDL) is committed to developing and deploying Artificial Intelligence (AI) technologies ethically, responsibly, and in a manner that respects human rights, privacy, and the broader public interest. This policy establishes governance frameworks, human oversight mechanisms, cybersecurity protections, and data stewardship principles for all AI-related initiatives undertaken by NRIDL.

This policy:

  • Defines measures to ensure compliance with relevant Canadian legislation and proposed legislation, including Bill C-27 (the Digital Charter Implementation Act, 2022, which encompasses the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence and Data Act), as well as the Personal Information Protection and Electronic Documents Act (PIPEDA).

  • Sets forth standards for human oversight and accountability, privacy and data governance, fairness and equity, safety, and cybersecurity.

  • Applies to all NRIDL AI projects, research activities, partnerships, contractors, vendors, and collaborators who work with or on behalf of NRIDL.

2. Alignment with Canadian Legislation and Standards
NRIDL’s AI initiatives will comply with Canadian privacy and AI-related regulatory frameworks, including:

  • Bill C-27:

    • Consumer Privacy Protection Act (CPPA): Requires that personal information be handled lawfully, with meaningful consent, data minimization, and transparency measures in place.

    • Personal Information and Data Protection Tribunal Act: Acknowledges the role of an independent tribunal in addressing complaints and ensuring accountability in data protection.

    • Artificial Intelligence and Data Act (AIDA): Guides the responsible design, development, and deployment of AI, especially for high-impact systems.

  • PIPEDA: Continues to govern the fair handling of personal data, ensuring adherence to its principles such as accountability, identifying purposes, consent, and limiting use, disclosure, and retention.

3. Human Oversight and Accountability

  • Governance Structure:
    NRIDL will establish an AI Ethics and Compliance Board (AIECB) comprising cross-functional members (legal, data protection, engineering, policy, ethics, community representatives). The AIECB will:

    • Review AI projects at key lifecycle stages (design, testing, deployment, post-deployment monitoring).

    • Oversee ethical impact assessments and privacy impact assessments to ensure compliance with CPPA, AIDA, and PIPEDA.

    • Provide guidance on risk management, address stakeholder concerns, and resolve internal disputes related to AI ethics.

  • Responsibility Assignments:
    NRIDL will designate accountable roles for each project phase:

    • Project Lead: Ensures compliance with this policy and oversees day-to-day activities.

    • Data Privacy Officer: Validates that data handling aligns with CPPA, PIPEDA, and internal privacy frameworks.

    • Security Officer: Ensures cybersecurity standards and practices are upheld.

    • Legal Counsel: Confirms compliance with applicable legislation and regulatory standards.

  • Human-in-the-Loop (HITL):
    For high-risk AI applications (e.g., those impacting individual rights, wellbeing, or educational opportunities), outputs will be subject to human review before final decisions are implemented. This provides a fail-safe check and clear accountability, preventing fully automated decisions from producing unjust outcomes.
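
As an illustration of this gating pattern, the minimal Python sketch below holds high-risk outputs in a review queue until a named human reviewer approves them. All names (RiskLevel, Decision, ReviewQueue) are hypothetical and not part of any NRIDL system.

```python
# Minimal human-in-the-loop gate: high-risk outputs wait for sign-off.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class RiskLevel(Enum):
    LOW = "low"
    HIGH = "high"  # e.g., decisions affecting rights or educational opportunities

@dataclass
class Decision:
    subject_id: str
    model_output: str
    risk: RiskLevel
    approved_by: Optional[str] = None  # reviewer identity, for accountability

class ReviewQueue:
    """Holds high-risk decisions until a human reviewer signs off."""
    def __init__(self) -> None:
        self._pending: list[Decision] = []

    def submit(self, decision: Decision) -> Optional[Decision]:
        # Low-risk outputs pass through; high-risk outputs are held for review.
        if decision.risk is RiskLevel.LOW:
            return decision
        self._pending.append(decision)
        return None  # not actioned until a reviewer approves

    def approve(self, decision: Decision, reviewer: str) -> Decision:
        # Record who approved the decision before it takes effect.
        self._pending.remove(decision)
        decision.approved_by = reviewer
        return decision
```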

  • Transparent Documentation:
    NRIDL will maintain documentation detailing the purpose, design choices, data sources, model architectures, and validation methodologies for each AI system. This supports audits, external reviews, and legal compliance.
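
A lightweight way to keep such documentation machine-readable is a structured record per system, loosely modeled on common "model card" practice. The field names and example values below are assumptions for illustration, not an NRIDL schema.

```python
# Illustrative per-system documentation record ("model card" style).
from dataclasses import dataclass, field

@dataclass
class SystemRecord:
    purpose: str
    data_sources: list[str]
    model_architecture: str
    validation_methods: list[str]
    design_choices: list[str] = field(default_factory=list)

record = SystemRecord(
    purpose="Recommend supplementary learning resources",
    data_sources=["anonymized course-interaction logs"],
    model_architecture="gradient-boosted trees",
    validation_methods=["held-out evaluation", "bias audit"],
)
```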

4. Cybersecurity Measures

  • Data Security and Encryption:
    All data used for AI training and inference, especially personal or sensitive data, will be encrypted both at rest (e.g., AES-256) and in transit (e.g., TLS 1.2 or later). Access to data will be restricted to authorized personnel with a legitimate operational need.
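
As a concrete sketch of encryption at rest, the snippet below uses the Python `cryptography` package with AES-256 in GCM mode (an authenticated cipher). Key management (storage in a KMS/HSM, rotation) is assumed and out of scope here.

```python
# Sketch of AES-256 encryption at rest using the `cryptography` package.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # 256-bit key; store in a KMS/HSM
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # unique nonce per encryption

plaintext = b"participant record"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)
assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext
```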

  • Identity and Access Management (IAM):
    NRIDL will implement strict access controls, enforce multifactor authentication (MFA), and adopt the principle of least privilege to prevent unauthorized access. Regular reviews of user accounts and permissions will be conducted.
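
The least-privilege principle can be enforced in code with a default-deny permission check, as in this minimal sketch; the roles and action names are illustrative only.

```python
# Default-deny permission check: anything not explicitly granted is refused.
ROLE_PERMISSIONS: dict[str, frozenset[str]] = {
    "data_steward": frozenset({"dataset:read", "dataset:annotate"}),
    "ml_engineer": frozenset({"dataset:read", "model:train"}),
    "auditor": frozenset({"audit_log:read"}),
}

def is_allowed(role: str, action: str) -> bool:
    # Unknown roles or unlisted actions are denied by default.
    return action in ROLE_PERMISSIONS.get(role, frozenset())

assert is_allowed("ml_engineer", "model:train")
assert not is_allowed("auditor", "dataset:read")
```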

  • Secure Software Development Lifecycle (SSDLC):
    Throughout the AI development process, security best practices will be integrated, including code reviews, automated scanning for vulnerabilities, and adherence to secure coding standards. AI components will be regularly patched and updated.
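
One way to wire automated scanning into a build pipeline is a gate script that fails the build when a scanner reports findings. The sketch below invokes `bandit` (static analysis of Python source) and `pip-audit` (dependency CVE audit) purely as examples of such tools; the path and tool choices are assumptions.

```python
# Build-pipeline gate: exit non-zero if any security scan reports findings.
import subprocess
import sys

def run_scans() -> int:
    checks = [
        ["bandit", "-r", "src/"],  # static analysis of Python source
        ["pip-audit"],             # audit installed dependencies for known CVEs
    ]
    for cmd in checks:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            return result.returncode  # non-zero exit fails the pipeline
    return 0

if __name__ == "__main__":
    sys.exit(run_scans())
```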

  • Penetration Testing and Vulnerability Assessments:
    NRIDL’s IT security team or appointed third parties will conduct periodic penetration tests and vulnerability assessments on AI systems and associated infrastructure. Findings will be addressed promptly, and remediation steps documented.

  • Incident Response and Breach Notification:
    In the event of a security incident, NRIDL will follow a formal incident response plan, aiming for rapid containment, eradication of threats, and system recovery. Any reportable breaches will be disclosed to affected parties and relevant authorities in accordance with CPPA and PIPEDA notification requirements.

5. Data Privacy and Governance

  • Data Minimization and Purpose Limitation:
    NRIDL will collect and use only the minimum amount of personal data necessary for AI training and operations. Purpose specification is a guiding principle: personal information will only be used for the reasons stated at the time of collection, as required by CPPA and PIPEDA.
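
Purpose limitation can be made mechanical by mapping each declared purpose to an explicit allow-list of fields and dropping everything else, as in this sketch; the purposes and field names are hypothetical.

```python
# Purpose-limited field selection: fields not allow-listed for the stated
# purpose are never retained.
ALLOWED_FIELDS: dict[str, set[str]] = {
    "course_recommendation": {"learner_id", "completed_modules"},
    "aggregate_reporting": {"region", "completion_rate"},
}

def minimize(record: dict, purpose: str) -> dict:
    allowed = ALLOWED_FIELDS.get(purpose, set())  # default-deny unknown purposes
    return {k: v for k, v in record.items() if k in allowed}

raw = {"learner_id": "L-17", "completed_modules": 8, "email": "x@example.org"}
print(minimize(raw, "course_recommendation"))  # email is never retained
```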

  • Consent and Transparency:
    Where personal data is involved, individuals will be informed about data usage in AI systems, including the intended purposes, potential risks, and safeguards. NRIDL will provide clear, accessible privacy notices and obtain meaningful consent where required.

  • Data Stewardship and Integrity:
    Data stewards will ensure that datasets are accurate, relevant, and current. Regular data quality checks and updates, alongside strict version control and logging, help maintain data integrity.
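
A simple integrity control consistent with this is content hashing per dataset version: store a SHA-256 digest alongside each file and verify it before use. A minimal sketch, with illustrative paths:

```python
# Dataset integrity check: compare a file's SHA-256 digest to its recorded value.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

def verify(path: Path, expected_digest: str) -> bool:
    # A mismatch means the dataset changed outside version control.
    return sha256_of(path) == expected_digest
```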

6. Fairness and Equity

  • Bias Mitigation Measures:
    NRIDL will actively test for and mitigate biases within AI models by the following measures (a metric sketch follows this list):

    • Employing diverse and representative training datasets.

    • Applying fairness metrics (e.g., demographic parity, equalized odds) and conducting bias audits.

    • Using debiasing techniques (e.g., re-weighting, adversarial de-biasing) to minimize disparate impacts on marginalized communities.
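
A minimal illustration of the two metrics named above, computed with NumPy on synthetic labels; a real audit would add confidence intervals and subgroup sample-size checks.

```python
# Demographic parity difference (gap in positive-prediction rates between
# groups) and an equalized-odds gap (max TPR/FPR difference between groups).
import numpy as np

def demographic_parity_diff(y_pred, group):
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_gap(y_true, y_pred, group):
    tprs, fprs = [], []
    for g in np.unique(group):
        m = group == g
        tprs.append(y_pred[m & (y_true == 1)].mean())  # true positive rate
        fprs.append(y_pred[m & (y_true == 0)].mean())  # false positive rate
    return max(max(tprs) - min(tprs), max(fprs) - min(fprs))

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(demographic_parity_diff(y_pred, group))
print(equalized_odds_gap(y_true, y_pred, group))
```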

  • Stakeholder Engagement and Inclusivity:
    NRIDL will engage external experts, community organizations, and individuals potentially affected by AI systems. Feedback from these engagements will inform model design, feature selection, and evaluation criteria to support equitable outcomes.

  • Continuous Improvement:
    NRIDL will continuously monitor AI outcomes and user feedback to identify new sources of bias or inequity. The AIECB will recommend adjustments, retraining, or alternative methods if disparities persist.

7. Safety and Reliability

  • Robust Testing and Validation:
    Prior to deployment, AI systems will undergo rigorous testing, including adversarial testing, stress testing, and scenario-based evaluations. Safety benchmarks will be established, and fail-safes built into critical systems to handle malfunctions or unexpected behaviors gracefully.
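
One simple stress test consistent with this is a perturbation-stability check: small random input noise should not flip many predictions. The toy model and the 5% threshold below are assumptions for illustration only.

```python
# Perturbation-stability check: measure the fraction of predictions that
# flip under small input noise.
import numpy as np

def perturbation_flip_rate(predict, X, noise_scale=0.01, seed=0):
    rng = np.random.default_rng(seed)
    base = predict(X)
    noisy = predict(X + rng.normal(scale=noise_scale, size=X.shape))
    return float(np.mean(base != noisy))

# Example with a trivial threshold "model" on synthetic data:
predict = lambda X: (X.sum(axis=1) > 0).astype(int)
X = np.random.default_rng(1).normal(size=(1000, 4))
assert perturbation_flip_rate(predict, X) < 0.05  # safety benchmark (assumed)
```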

  • Explainability and Interpretability:
    NRIDL will strive to use explainable AI methods, especially for high-stakes decisions affecting learners, educators, or the public. Clear explanations of AI-driven outcomes will foster trust and make it easier to identify and correct errors.
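
As one model-agnostic example, permutation importance from scikit-learn estimates how much shuffling each feature degrades performance, giving a coarse explanation of what drives a model's outputs; the model and data below are synthetic placeholders.

```python
# Permutation importance: performance drop when each feature is shuffled.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: {score:.3f}")
```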

  • Monitoring and Maintenance:
    Post-deployment, AI systems will be monitored to detect performance degradation, anomalies, or unsafe outputs. Maintenance protocols, including periodic model updates and retraining, will ensure ongoing reliability and adherence to evolving legal and ethical standards.
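
Drift monitoring can start with a simple distribution comparison such as the population stability index (PSI), sketched below; the 0.2 alert threshold is a common rule of thumb, not a regulatory standard, and the data is synthetic.

```python
# PSI: compare a feature's live distribution to its training baseline.
import numpy as np

def psi(baseline, live, bins=10):
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b, _ = np.histogram(baseline, bins=edges)
    l, _ = np.histogram(live, bins=edges)
    b = np.clip(b / b.sum(), 1e-6, None)  # avoid log(0) on empty bins
    l = np.clip(l / l.sum(), 1e-6, None)
    return float(np.sum((l - b) * np.log(l / b)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)
live = rng.normal(0.6, 1.0, 5000)  # shifted distribution
if psi(baseline, live) > 0.2:
    print("drift detected: schedule review and possible retraining")
```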

8. Compliance, Audits, and Certification

  • Training and Education:
    NRIDL will provide regular training for all staff involved in AI development and management, ensuring familiarity with legal requirements under Bill C-27, PIPEDA, and internal ethical standards.

  • Internal Audits:
    Periodic internal audits will evaluate compliance with this policy, as well as the effectiveness of data protection, cybersecurity measures, and fairness mechanisms. Audit results will inform continuous improvements.

  • External Reviews and Certifications:
    NRIDL may engage accredited third parties to conduct external audits or seek relevant certifications (e.g., ISO/IEC 27001 for information security, ISO/IEC 27701 for privacy information management). Such certifications and reviews enhance transparency and trust with stakeholders.

9. Policy Review and Updates
This policy will be reviewed at least annually, or more frequently as needed, to adapt to technological changes, emerging best practices, and evolving legal requirements (including updates to Bill C-27 and PIPEDA). Amendments will be communicated promptly to all relevant parties.

10. Public Accountability and Transparency
NRIDL will publish high-level summaries of its Responsible AI practices, impact assessments, and key metrics to inform the public, stakeholders, and regulators. This transparency aligns with NRIDL’s mission to democratize learning and foster public trust in AI-driven innovations.