Responsible AI and IQNECT


IQNECT is developed with a strong commitment to responsible AI practices.

IQNECT leverages OpenAI’s API, which is built and maintained in accordance with industry-leading Responsible AI principles.


IQNECT platform safeguards

IQNOX has implemented several product-level measures to prevent and manage inappropriate AI output:

  • Human-in-the-loop validation: AI-generated content is presented for user review prior to being applied to systems or shared externally.
  • Role-based access controls (RBAC): Sensitive AI functions (such as system write-backs or integration with production platforms) are restricted to authorized users.
  • Context scoping: Prompts and output formatting are tailored to stay within defined domains (e.g., technical documentation, requirements analysis) to minimize the risk of inappropriate or out-of-scope recommendations.
  • Audit logging and traceability: All AI-generated content is logged and traceable to ensure accountability and transparency in usage.
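The safeguards above can be illustrated with a minimal sketch. All names, roles, and methods here are hypothetical, not IQNECT's actual API: the point is simply how human-in-the-loop review, RBAC, and audit logging compose into one gate for AI write-backs.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative roles permitted to push AI content into a production system.
AUTHORIZED_WRITE_ROLES = {"admin", "requirements_lead"}

@dataclass
class AiSuggestion:
    """A piece of AI-generated content awaiting human review (hypothetical)."""
    content: str
    approved: bool = False  # human-in-the-loop: every suggestion starts unapproved
    audit_log: list = field(default_factory=list)

    def _log(self, event: str) -> None:
        # Audit logging: every decision is timestamped and traceable.
        self.audit_log.append(f"{datetime.now(timezone.utc).isoformat()} {event}")

    def approve(self, reviewer: str) -> None:
        # Human-in-the-loop validation step.
        self.approved = True
        self._log(f"approved by {reviewer}")

    def write_back(self, user_role: str) -> bool:
        # RBAC: only authorized roles may write back, and only after
        # explicit human approval.
        if user_role not in AUTHORIZED_WRITE_ROLES:
            self._log(f"write-back denied for role {user_role}")
            return False
        if not self.approved:
            self._log("write-back denied: not human-approved")
            return False
        self._log(f"write-back performed by role {user_role}")
        return True
```

A suggestion created this way is rejected both for unauthorized roles and for authorized roles acting before review, and every attempt lands in the audit trail.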

Leveraging OpenAI’s safety features

IQNECT uses OpenAI’s Application Programming Interface (API) as its AI engine, processing customer data for activities such as requirements analysis. OpenAI applies numerous safety mitigations at the model and infrastructure level, including:

  • Content moderation filters that detect and block harmful, unsafe, or policy-violating content.
  • Alignment training that reduces the likelihood of biased, toxic, or misleading responses.
  • Preparedness framework to manage existential AI safety risk.
  • Usage monitoring for potential misuse or abuse of generative capabilities.
  • Compliance with standards such as SOC 2 and CSA STAR.

These guardrails are continuously updated as model capabilities evolve.

OpenAI does not train its models on IQNOX data processed through the OpenAI API.

Additionally, OpenAI automatically deletes data processed via the API within 30 days (barring a legal hold). OpenAI is contractually bound to protect the confidentiality of customer data provided via the API. OpenAI does this with industry-standard measures such as encrypting all data at rest (using AES-256) and in transit (using TLS 1.2+). It also offers a Bug Bounty Program for responsible disclosure of vulnerabilities discovered on its platform and products. 
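The "TLS 1.2+" point can be demonstrated client-side with the Python standard library. This is generic `ssl` usage, not IQNECT- or OpenAI-specific code, showing how a client context refuses protocol versions older than TLS 1.2 while keeping certificate validation on:

```python
import ssl

# Build a client TLS context that refuses anything older than TLS 1.2,
# mirroring the "encryption in transit" guarantee described above.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# create_default_context() keeps certificate validation enabled by default.
print(ctx.verify_mode == ssl.CERT_REQUIRED)                 # True
print(ctx.minimum_version >= ssl.TLSVersion.TLSv1_2)        # True
```

Any handshake through `ctx` against a server offering only TLS 1.0/1.1 would fail rather than silently downgrade.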

Finally, OpenAI has undergone a SOC 2 Type II examination of its security controls.


Customer-controlled guardrails

Where applicable, IQNOX allows customers to:

  • Configure AI use cases appropriate to their risk profile
  • Disable or restrict access to certain models or provider features
  • Submit feedback on undesired responses for review and adjustment
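A per-tenant configuration object is one way to picture these controls. The schema below is purely illustrative (field names and feature/model identifiers are invented, not IQNECT's actual configuration): it shows features being enabled to match a risk profile and specific models being blocked.

```python
from dataclasses import dataclass, field

@dataclass
class AiGuardrailConfig:
    """Hypothetical per-customer AI guardrail settings."""
    enabled_features: set = field(default_factory=lambda: {"semantic_search"})
    blocked_models: set = field(default_factory=set)

    def is_allowed(self, feature: str, model: str) -> bool:
        # A request passes only if the feature is enabled for this tenant
        # and the model has not been blocked by the customer.
        return feature in self.enabled_features and model not in self.blocked_models

# Example: a tenant enables an extra feature and blocks one model.
cfg = AiGuardrailConfig()
cfg.enabled_features.add("decompose")
cfg.blocked_models.add("experimental-model")
```

With this configuration, a "decompose" request on an unblocked model is allowed, while requests using the blocked model or a disabled feature are refused.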

IQNOX continually monitors and improves its controls in response to advancements in generative AI and evolving customer requirements.

Updated on 2026-03-03
