You can use Quotient’s SDK to automatically detect hallucinations and other reliability issues in your AI outputs.

Initialize the Logger with Detections

from quotientai import QuotientAI, DetectionType

quotient = QuotientAI(api_key="your-quotient-api-key")

logger = quotient.logger.init(
    app_name="my-first-app",
    environment="dev",
    sample_rate=1.0,
    # hallucination and document relevancy detections will run on 100% of
    # your logs, checking model outputs against the documents you provide
    detections=[DetectionType.HALLUCINATION, DetectionType.DOCUMENT_RELEVANCY],
    detection_sample_rate=1.0,
)

Send Logs with Detections Enabled

log_id = quotient.log(
    user_query="What is the capital of France?",
    model_output="The capital of France is Paris.",
    documents=[
        "France is a country in Western Europe.",
        "Paris is the capital of France.",
    ],
)

Poll for Detections

Synchronously poll for detection results using the client:

detection = quotient.poll_for_detection(log_id=log_id)

Parameters:

  • log_id (string): The ID of the log to poll for detection results.
  • timeout (int): The maximum time to wait for a response in seconds. Defaults to 300.
  • poll_interval (float): The interval between checks in seconds. Defaults to 2.0.
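
For example, to wait up to five minutes, checking every two seconds (these are the documented defaults):

detection = quotient.poll_for_detection(
    log_id=log_id,
    timeout=300,        # max seconds to wait for a result
    poll_interval=2.0,  # seconds between checks
)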

Returns:

  • Log (object): A Log object containing the following fields:
    • id (string): Unique identifier for the log entry.
    • app_name (string): Name of the application that generated the log.
    • environment (string): Environment where the log was generated (e.g., “dev”, “prod”).
    • detections (array): List of detection types that were configured for this log.
    • detection_sample_rate (float): Sample rate used for detections on this log.
    • user_query (string): The original user query or prompt that was logged.
    • model_output (string): The model’s response that was logged.
    • documents (array): List of documents used as context for the model. Can be strings or LogDocument objects.
    • message_history (array): Previous messages in the conversation, following the OpenAI message format.
    • instructions (array): List of instructions provided to the model.
    • tags (object): Dictionary of tags associated with the log entry.
    • created_at (datetime): Timestamp when the log was created.
    • status (string): Current status of the log entry.
    • has_hallucination (boolean): Whether the model output was detected to contain hallucinations.
    • doc_relevancy_average (float): Average relevancy score for the documents provided.
    • updated_at (datetime): Timestamp when the log was last updated.
    • evaluations (array): List of evaluation results for the log entry.
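
Once polling returns, the detection results are available directly on the returned Log object. For example, using the fields listed above:

detection = quotient.poll_for_detection(log_id=log_id)

if detection.has_hallucination:
    print(f"Hallucination detected in log {detection.id}")
print(f"Average document relevancy: {detection.doc_relevancy_average}")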

Detections Dashboard

Go to the Detections Dashboard to see your logs and any detected hallucinations.

Hallucinations

How do we define hallucinations?

The hallucination rate measures how often a model generates information that cannot be found in its provided inputs, such as retrieved documents, user messages, or system prompts.

Quotient reports an extrinsic hallucination rate: we measure whether the model’s output is supported by the context it was given, and flag any output that is not.

How do we detect hallucinations?

  1. Break output into individual claims or sentences
  2. Compare each claim to available context, including:
    • user_query (what the user asked)
    • documents (retrieved evidence)
    • message_history (prior turns in the conversation)
  3. Flag claims that lack support in any of the above inputs as hallucinations

If a sentence cannot be traced back to any context, it’s counted as a hallucination.
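
To make the mechanics concrete, here is a minimal illustrative sketch, not Quotient’s actual detector: is_supported is a hypothetical stand-in for a real grounding check (such as an NLI model or LLM judge), and message_history is simplified to plain strings:

def is_supported(claim: str, context: list[str]) -> bool:
    # Hypothetical stand-in: naive substring match. A production
    # grounding check would use semantic entailment, not containment.
    return any(claim.lower() in passage.lower() for passage in context)

def hallucination_rate(model_output: str, user_query: str,
                       documents: list[str], message_history: list[str]) -> float:
    # 1. Break the output into individual claims/sentences
    claims = [s.strip() for s in model_output.split(".") if s.strip()]
    # 2. Pool every available input as context
    context = [user_query, *documents, *message_history]
    # 3. Count claims that lack support in any input
    unsupported = [c for c in claims if not is_supported(c, context)]
    return len(unsupported) / len(claims) if claims else 0.0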

Why is it important to monitor your AI system for hallucinations?

Extrinsic hallucinations are the primary failure mode in retrieval-augmented AI systems. Even when retrieval succeeds, generation can drift. This metric helps teams:

  • Catch hallucinations early in development
  • Monitor output quality post-deployment
  • Guide prompt iteration and model fine-tuning

Well-grounded systems typically show a hallucination rate below 5%. If yours is higher, it often signals that your data ingestion, retrieval pipeline, or prompting needs improvement.

Document Relevance

How do we define document relevance?

Document Relevance measures how well your retrieval or search system finds context that’s actually useful for answering the user’s query. Specifically, it quantifies how relevant the retrieved documents (or chunks) are to what the user asked.

A document is considered relevant if it contains information that addresses at least one part of the query. If it does not address any part, it is marked as irrelevant.

The Document Relevance score is calculated as the fraction of documents that are relevant to at least one part of the user query.

How do we measure document relevance?

  1. Compare each document (or chunk) against the full user_query.
  2. Determine whether the document contains information relevant to any part of the query:
    • If it does, mark it as relevant
    • If it doesn’t, mark it as irrelevant
  3. Compute the overall document relevance score as relevant_documents / total_documents.
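
The arithmetic of step 3 is straightforward; in this illustrative sketch, is_relevant is a hypothetical stand-in for the managed relevance judgment:

def is_relevant(document: str, user_query: str) -> bool:
    # Hypothetical stand-in: keyword overlap. The real check judges
    # whether the document addresses any part of the query.
    return any(term in document.lower() for term in user_query.lower().split())

def document_relevance(documents: list[str], user_query: str) -> float:
    if not documents:
        return 0.0
    relevant = [d for d in documents if is_relevant(d, user_query)]
    # relevant_documents / total_documents
    return len(relevant) / len(documents)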

Why is it important to monitor document relevance?

Document Relevance is a core metric for evaluating search- and retrieval-augmented systems. Even when the model itself generates well, weak retrieval can still degrade response quality. This metric helps teams:

  • Assess whether retrieval is surfacing useful context
  • Debug cases where generation fails despite successful prompting
  • Improve recall/precision of retrieved results
  • Monitor drift after retriever or data changes

High-performing systems typically show > 75% document relevance. Lower scores may signal ambiguous user queries, incorrect retrieval, or noisy source data.