How to use output parsers to parse an LLM response into structured format
Language models output text. But there are times when you want more structured information than just text back. While some model providers support built-in ways to return structured output, not all do.
Output parsers are classes that help structure language model responses. There are two main methods an output parser must implement:
- “Get format instructions”: A method which returns a string containing instructions for how the output of a language model should be formatted.
- “Parse”: A method which takes in a string (assumed to be the response from a language model) and parses it into some structure.
And then one optional one:
- “Parse with prompt”: A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to be the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.
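To make the contract concrete, here is a minimal, dependency-free sketch of these three methods in TypeScript. This is a hypothetical standalone class for illustration only, not a real LangChain parser (actual parsers extend `BaseOutputParser` from `@langchain/core`):

```typescript
// Hypothetical sketch of the output parser contract: a parser that asks the
// model for a comma-separated list and parses the reply into a string array.
class CommaSeparatedListParser {
  // "Get format instructions": tells the model how to format its reply.
  getFormatInstructions(): string {
    return "Respond ONLY with a comma-separated list, e.g. `foo, bar, baz`.";
  }

  // "Parse": turns the raw model text into a structured value.
  parse(text: string): string[] {
    return text
      .split(",")
      .map((item) => item.trim())
      .filter((item) => item.length > 0);
  }

  // "Parse with prompt" (optional): the originating prompt is available here
  // in case the parser wants to retry or fix malformed output.
  parseWithPrompt(text: string, prompt: string): string[] {
    void prompt; // unused in this sketch
    return this.parse(text);
  }
}

const listParser = new CommaSeparatedListParser();
console.log(listParser.parse("red, green, blue")); // [ 'red', 'green', 'blue' ]
```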
Get started
LCEL
Pick your chat model:
- OpenAI
- Anthropic
- FireworksAI
- MistralAI
Install dependencies
- npm: `npm i @langchain/openai`
- yarn: `yarn add @langchain/openai`
- pnpm: `pnpm add @langchain/openai`
Add environment variables
OPENAI_API_KEY=your-api-key
Instantiate the model
```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-3.5-turbo-0125",
  temperature: 0,
});
```
Install dependencies
- npm: `npm i @langchain/anthropic`
- yarn: `yarn add @langchain/anthropic`
- pnpm: `pnpm add @langchain/anthropic`
Add environment variables
ANTHROPIC_API_KEY=your-api-key
Instantiate the model
```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
  temperature: 0,
});
```
Install dependencies
- npm: `npm i @langchain/community`
- yarn: `yarn add @langchain/community`
- pnpm: `pnpm add @langchain/community`
Add environment variables
FIREWORKS_API_KEY=your-api-key
Instantiate the model
```typescript
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";

const model = new ChatFireworks({
  model: "accounts/fireworks/models/firefunction-v1",
  temperature: 0,
});
```
Install dependencies
- npm: `npm i @langchain/mistralai`
- yarn: `yarn add @langchain/mistralai`
- pnpm: `pnpm add @langchain/mistralai`
Add environment variables
MISTRAL_API_KEY=your-api-key
Instantiate the model
```typescript
import { ChatMistralAI } from "@langchain/mistralai";

const model = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0,
});
```
```typescript
import { RunnableSequence } from "@langchain/core/runnables";
import { StructuredOutputParser } from "@langchain/core/output_parsers";
import { PromptTemplate } from "@langchain/core/prompts";

const parser = StructuredOutputParser.fromNamesAndDescriptions({
  answer: "answer to the user's question",
  source: "source used to answer the user's question, should be a website.",
});

const chain = RunnableSequence.from([
  PromptTemplate.fromTemplate(
    "Answer the user's question as best as possible.\n{format_instructions}\n{question}"
  ),
  model,
  parser,
]);
```
```typescript
console.log(parser.getFormatInstructions());
```
You must format your output as a JSON value that adheres to a given "JSON Schema" instance.
"JSON Schema" is a declarative language that allows you to annotate and validate JSON documents.
For example, the example "JSON Schema" instance {{"properties": {{"foo": {{"description": "a list of test words", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}}
would match an object with one required property, "foo". The "type" property specifies "foo" must be an "array", and the "description" property semantically describes it as "a list of test words". The items within "foo" must be strings.
Thus, the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of this example "JSON Schema". The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.
Your output will be parsed and type-checked according to the provided schema instance, so make sure all fields in your output match the schema exactly and there are no trailing commas!
Here is the JSON Schema instance your output must adhere to. Include the enclosing markdown codeblock:
```json
{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"source":{"type":"string","description":"source used to answer the user's question, should be a website."}},"required":["answer","source"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}
```
```typescript
const response = await chain.invoke({
  question: "What is the capital of France?",
  format_instructions: parser.getFormatInstructions(),
});

console.log(response);
```
{ answer: "Paris", source: "https://en.wikipedia.org/wiki/Paris" }
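Since the format instructions tell the model to wrap its JSON in a markdown code block, the parser must strip that fence before the JSON can be parsed. The following is a simplified, dependency-free sketch of that extraction step; `extractJson` is a hypothetical helper, not the actual StructuredOutputParser implementation:

```typescript
// Hypothetical helper: extract the JSON payload from a model reply that may
// wrap it in a fenced code block (three backticks with an optional "json" tag).
function extractJson(modelOutput: string): any {
  const fenceMatch = modelOutput.match(/`{3}(?:json)?\s*([\s\S]*?)`{3}/);
  const raw = fenceMatch ? fenceMatch[1] : modelOutput;
  return JSON.parse(raw.trim());
}

// Simulated model reply, built programmatically to avoid nesting fences here.
const fence = "`".repeat(3);
const reply = `${fence}json\n{"answer": "Paris", "source": "https://en.wikipedia.org/wiki/Paris"}\n${fence}`;

console.log(extractJson(reply).answer); // Paris
```

A real parser would also surface a useful error (including the raw text) when `JSON.parse` fails, so the caller can retry or fix the output.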
Output parsers implement the Runnable interface, the basic building block of the LangChain Expression Language (LCEL). This means they support invoke, stream, batch, and streamLog calls.

Output parsers accept a string or BaseMessage as input and can return an arbitrary type.
While all parsers support the streaming interface, only certain parsers can stream through partially parsed objects, since this capability depends heavily on the output type. Parsers that cannot construct partial objects simply yield the fully parsed output. Streaming the StructuredOutputParser chain above, for example, emits the complete object in a single chunk:
```typescript
const stream = await chain.stream({
  question: "What is the capital of France?",
  format_instructions: parser.getFormatInstructions(),
});

for await (const s of stream) {
  console.log(s);
}
```
{
answer: "The capital of France is Paris.",
source: "https://en.wikipedia.org/wiki/Paris"
}
The JsonOutputParser, by contrast, can stream through partial outputs:

```typescript
import { JsonOutputParser } from "@langchain/core/output_parsers";

const jsonPrompt = PromptTemplate.fromTemplate(
  "Return a JSON object with an `answer` key that answers the following question: {question}"
);
const jsonParser = new JsonOutputParser();
const jsonChain = jsonPrompt.pipe(model).pipe(jsonParser);

for await (const s of await jsonChain.stream({
  question: "Who invented the microscope?",
})) {
  console.log(s);
}
```
{}
{ answer: "" }
{ answer: "The" }
{ answer: "The microscope" }
{ answer: "The microscope was" }
{ answer: "The microscope was invented" }
{ answer: "The microscope was invented by" }
{ answer: "The microscope was invented by Zach" }
{ answer: "The microscope was invented by Zacharias" }
{ answer: "The microscope was invented by Zacharias J" }
{ answer: "The microscope was invented by Zacharias Jans" }
{ answer: "The microscope was invented by Zacharias Janssen" }
{ answer: "The microscope was invented by Zacharias Janssen and" }
{ answer: "The microscope was invented by Zacharias Janssen and his" }
{
answer: "The microscope was invented by Zacharias Janssen and his father"
}
{
answer: "The microscope was invented by Zacharias Janssen and his father Hans"
}
{
answer: "The microscope was invented by Zacharias Janssen and his father Hans in"
}
{
answer: "The microscope was invented by Zacharias Janssen and his father Hans in the"
}
{
answer: "The microscope was invented by Zacharias Janssen and his father Hans in the late"
}
{
answer: "The microscope was invented by Zacharias Janssen and his father Hans in the late 16"
}
{
answer: "The microscope was invented by Zacharias Janssen and his father Hans in the late 16th"
}
{
answer: "The microscope was invented by Zacharias Janssen and his father Hans in the late 16th century"
}
{
answer: "The microscope was invented by Zacharias Janssen and his father Hans in the late 16th century."
}
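Partial streaming like the run above can be implemented by re-parsing the accumulated text as each chunk arrives, repairing the fragment just enough to be valid JSON. The following is a naive, dependency-free sketch of that idea, not the actual JsonOutputParser implementation (which handles many more edge cases):

```typescript
// Naive partial-JSON repair: close any unterminated string literal and any
// unbalanced braces, then attempt a normal JSON.parse.
function parsePartial(fragment: string): unknown {
  let repaired = fragment.trim();
  // An odd number of quote characters means a string literal is left open.
  const quotes = (repaired.match(/"/g) ?? []).length;
  if (quotes % 2 === 1) repaired += '"';
  // Balance unclosed braces (naive: ignores braces inside strings).
  const opens = (repaired.match(/{/g) ?? []).length;
  const closes = (repaired.match(/}/g) ?? []).length;
  repaired += "}".repeat(Math.max(0, opens - closes));
  try {
    return JSON.parse(repaired);
  } catch {
    return null; // fragment not yet repairable
  }
}

// Each streamed chunk extends the accumulated buffer; re-parse after each one.
const chunks = ['{"an', 'swer": "The mi', 'croscope"', "}"];
let buffer = "";
for (const chunk of chunks) {
  buffer += chunk;
  console.log(parsePartial(buffer));
}
// logs: null, { answer: 'The mi' }, { answer: 'The microscope' }, { answer: 'The microscope' }
```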