Class LabeledCriteriaEvalChain

Criteria evaluation chain that requires references.
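
Example

A minimal sketch of obtaining and running this evaluator. It assumes the loadEvaluator factory from the langchain/evaluation entry point, where the "labeled_criteria" type selects this chain; adjust the criteria and model options to your setup.

import { loadEvaluator } from "langchain/evaluation";

// "labeled_criteria" selects a criteria evaluator that also expects a
// reference label alongside the input and prediction.
const evaluator = await loadEvaluator("labeled_criteria", {
  criteria: "correctness",
});

const res = await evaluator.evaluateStrings({
  input: "What is 2 + 2?",
  prediction: "2 + 2 equals 5.",
  reference: "4",
});
// res is expected to hold the parsed verdict, e.g. { reasoning, value, score }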

Hierarchy

Constructors

Properties

outputKey: string = "text"
outputParser: BaseLLMOutputParser<EvalOutputType> = ...
requiresInput: boolean = true
requiresReference: boolean = true
skipReferenceWarning: string = ...
verbose: boolean

Whether to print out response text.

callbacks?: Callbacks
criterionName?: string
evaluationName?: string = ...
memory?: BaseMemory
metadata?: Record<string, unknown>
skipInputWarning?: string = ...
tags?: string[]

Accessors

  • get inputKeys(): string[]
  • Returns string[]

  • get outputKeys(): string[]
  • Returns string[]

Methods

  • Check if the evaluation arguments are valid.

    Parameters

    • Optional reference: string

      The reference label.

    • Optional input: string

      The input string.

    Returns void

    Throws

    If the evaluator requires an input string but none is provided, or if the evaluator requires a reference label but none is provided.
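
    Example

    A rough sketch of the check for this class; the checkEvaluationArgs name and the exact error wording are assumptions, since this page does not show them.

    // requiresReference is true on this chain, so omitting the reference
    // label should throw before any LLM call is made.
    try {
      chain.checkEvaluationArgs(undefined, "What is 2 + 2?");
    } catch (err) {
      console.error(err); // reports that a reference label is required
    }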

  • Evaluate Chain or LLM output based on an optional input and reference label.

    Parameters

    Returns Promise<ChainValues>

    The evaluation results containing the score or value. It is recommended that the dictionary contain the following keys:

    • score: the score of the evaluation, if applicable.
    • value: the string value of the evaluation, if applicable.
    • reasoning: the reasoning for the evaluation, if applicable.
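
    Example

    A sketch of a labeled evaluation; the reasoning text and the exact verdict depend on the underlying model.

    const result = await chain.evaluateStrings({
      input: "What is the capital of France?",
      prediction: "The capital of France is Paris.",
      reference: "Paris",
    });
    // e.g. { reasoning: "...", value: "Y", score: 1 }
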
  • Invoke the chain with the provided input and return the output.

    Parameters

    Returns Promise<ChainValues>

    Promise that resolves with the output of the chain run.
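
    Example

    A sketch of invoking the chain directly. The exact prompt variables are not listed on this page, so the keys below are assumptions; check the inputKeys accessor for the real names.

    const chainOutput = await chain.invoke({
      input: "What is 2 + 2?",     // assumed key
      output: "2 + 2 equals 4.",   // assumed key
      reference: "4",              // assumed key
    });
    // The parsed evaluation is stored under the chain's outputKey ("text").
    console.log(chainOutput[chain.outputKey]);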

  • Create a new runnable sequence that runs each individual runnable in series, piping the output of one runnable into another runnable or runnable-like.

    Type Parameters

    • NewRunOutput

    Parameters

    • coerceable: RunnableLike<ChainValues, NewRunOutput>

      A runnable, function, or object whose values are functions or runnables.

    Returns RunnableSequence<ChainValues, Exclude<NewRunOutput, Error>>

    A new runnable sequence.
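
    Example

    A plain function is a valid RunnableLike, so one way to post-process the verdict looks like this (input keys are assumptions, as above).

    // Pipe the chain into a function that pulls out just the parsed verdict.
    const graded = chain.pipe((values) => values[chain.outputKey]);
    const verdict = await graded.invoke({
      input: "What is 2 + 2?",
      output: "2 + 2 equals 4.",
      reference: "4",
    });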

  • Format the prompt with values and pass it to the LLM.

    Parameters

    Returns Promise<EvalOutputType>

    Completion from LLM.

    Example

    llm.predict({ adjective: "funny" })
    
  • Parameters

    • inputs: Record<string, unknown>
    • outputs: Record<string, unknown>
    • returnOnlyOutputs: boolean = false

    Returns Promise<Record<string, unknown>>

  • Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state.

    Parameters

    Returns AsyncGenerator<RunLogPatch, any, unknown>
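
    Example

    A sketch of consuming the run log (input keys are assumptions, as above).

    // Each patch carries jsonpatch ops; applied in order they reconstruct
    // the full run state, including inner LLM runs.
    for await (const patch of chain.streamLog({
      input: "What is 2 + 2?",
      output: "2 + 2 equals 4.",
      reference: "4",
    })) {
      console.log(patch.ops);
    }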

  • Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated.

    Parameters

    Returns AsyncGenerator<ChainValues, any, unknown>

  • Resolve the criteria to evaluate.

    Parameters

    • Optional criteria: CriteriaLike

      The criteria to evaluate the runs against. It can be:

        • a mapping of a criterion name to its description
        • a single criterion name present in one of the default criteria
        • a single ConstitutionalPrinciple instance

    Returns Record<string, string>

    A dictionary mapping criterion names to descriptions.
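
    Example

    A sketch of resolving criteria; whether the helper is exposed as a static resolveCriteria method is not shown on this page, so treat the call style as an assumption.

    // A built-in criterion name resolves to its stock description...
    const builtIn = LabeledCriteriaEvalChain.resolveCriteria("conciseness");

    // ...while a custom mapping passes through unchanged.
    const custom = LabeledCriteriaEvalChain.resolveCriteria({
      numeric: "Does the output contain numeric information?",
    });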
