Responsible AI Policy
Last Updated: December 24, 2024
1. Introduction
The National Research Institute for Democratized Learning (“NRIDL,” “we,” “us,” or “our”) is a nonprofit organization dedicated to making AI and digital technologies accessible to all, thereby fostering equitable education and bridging the digital divide. Our AI solutions and services (“AI Products”) are designed to empower communities, educators, small businesses, and learners with responsibly developed, transparent, and mission-aligned tools. This Policy outlines our approach to developing, deploying, and using AI technologies ethically and in line with our core values.
2. Scope & Purpose
This AI Policy applies to all AI-driven features, tools, and initiatives provided or supported by NRIDL through our websites, platforms, or partnerships (collectively, the “Services”). It governs how we design, use, and manage AI to ensure that our solutions are developed and employed responsibly, transparently, and for the public good.
You may encounter additional offerings or integrations from third parties via our platforms or programs. Those services are not covered by this Policy and remain subject to the terms and conditions of their respective providers.
3. Our Guiding Principles
Democratized Access & Equity
We strive to ensure that our AI Products promote access for underserved groups, bridging socio-economic, geographical, and cultural divides.
Transparency & Trust
We prioritize clear communication about when and how AI is used, ensuring that stakeholders (learners, educators, and community partners) understand the nature of AI-generated outputs and their potential limitations.
Human-Centered Design & Ethics
We embed ethical considerations in every stage of AI development and deployment, placing human well-being, dignity, and autonomy at the forefront.
Data Privacy & Security
We uphold strict policies to protect user data. We aim to use personal information only as necessary to enhance AI capabilities responsibly, while maintaining confidentiality and security.
Accountability & Continuous Improvement
We regularly assess and refine our AI Products, striving to minimize unintended harm, reduce biases, and respond proactively to community feedback.
4. AI Products and Your Data
4.1 Types of AI Products
Our AI Products may include:
Generative AI Tools: Automated assistants that provide advice, draft text, develop lesson plans, or perform other generative tasks to support educators, small businesses, and community projects.
Analytics & Insights: Machine-learning models that analyze user-generated data to offer insights on learning outcomes, operational efficiency, or community engagement.
Personalized Learning Platforms: Adaptive educational technologies that tailor content to a learner’s progress and needs.
4.2 Data Collection & Use
User Input: When you interact with our AI Products (for example, by entering text, prompts, questions, or other content), we may use this data to generate outputs and improve model performance.
Aggregated & Anonymized Data: We may collect and aggregate user interactions for internal research and to refine our AI Products. All personally identifiable information (“PII”) is removed or anonymized before such data is used for training or analysis.
Training Models & Improving Services: NRIDL may use your inputs to enhance the accuracy and relevance of our AI solutions. However, we do not use private or proprietary user content (e.g., uploaded documents) to train third-party models unless explicitly authorized by the user.
5. Acceptable Use
5.1 Prohibited Uses
Our AI Products are intended for educational, humanitarian, and socially constructive purposes. Accordingly, you shall not use them for:
Illegal or Harmful Purposes: Any activity that violates local, national, or international laws, or that promotes violence, terrorism, or abuse.
Harassment or Discrimination: Content that is threatening, harassing, defamatory, hateful, or otherwise discriminatory toward individuals or groups.
Misinformation or Manipulation: Creating deceptive materials, deepfakes, political propaganda, or fraudulent content.
Spam or Malicious Content: Disseminating large-scale spam, malicious code, or content-farming materials.
Privacy Violations: Submitting others’ personal information without consent, or infringing upon data protection laws and regulations.
Intellectual Property Infringement: Violating copyright, trademark, or other intellectual property rights.
Prompt Injection or Exploits: Attempting to manipulate model behavior, bypass safeguards, or discover or alter the underlying prompts, source code, or logic in unauthorized ways.
5.2 High-Risk Areas
Users must not deploy our AI Products in contexts deemed “high risk” under applicable AI legislation (such as the high-risk domains defined in the EU AI Act) unless explicitly approved in writing by NRIDL. This includes any use that may threaten individual rights, health, or public safety.
6. Compliance & Enforcement
6.1 Reporting Misuse
If you notice any misuse of our AI Products—such as generating harmful content, infringing on privacy, or otherwise violating these standards—please contact us at [Contact Email]. We are committed to investigating and taking appropriate corrective action.
6.2 Consequences of Violation
Violations of this Policy or our other terms may lead to suspension or termination of access to our Services. We reserve the right to take additional legal measures if warranted.
7. Use of Third-Party Service Providers
NRIDL may integrate third-party AI services to enhance features such as language translation, sentiment analysis, or generative text. These third-party providers remain subject to their own terms of use and policies, which we encourage you to review independently.
We strive to work only with partners who share our values of data privacy, safety, and ethical development.
8. Security, Privacy, and Trust
8.1 Security Measures
We employ technical and organizational safeguards to protect the confidentiality and integrity of data processed by our AI Products. Although no system is entirely immune to security threats, we regularly review and update our protocols to mitigate risks.
8.2 Ethical & Safety Reviews
Before releasing major updates or new AI features, we conduct ethical and safety reviews to identify and address potential biases, harmful outcomes, or compliance issues. We encourage users to exercise caution and judgment when relying on AI-generated content, especially in critical domains such as public health or financial advice.
8.3 Transparency & Disclosure
Features that rely on third-party AI platforms will be clearly marked, and we will provide appropriate disclosures indicating the nature of the AI being used. We encourage users to identify AI-generated content as such to maintain transparency and trust.
9. Updates to this Policy
NRIDL reserves the right to update or modify this AI Policy at any time. Significant changes will be announced on our website or via other relevant channels, and the updated version will be effective immediately upon posting. Your continued use of our AI Products after such changes implies acceptance of the revised terms.
10. Contact Us
If you have questions or concerns regarding this AI Policy or our AI practices, please reach out to us.
We value your feedback and remain committed to improving our AI Products in alignment with our mission of democratizing learning, closing the digital divide, and fostering inclusive, equitable innovation.