Haystack: A Great LangChain Alternative for RAG (RAG Evaluation Series)
This is the third installment in a multi-part series I am doing to evaluate various RAG systems using tvalmetrics, the open source package from Tonic Validate, a RAG evaluation and benchmarking platform. All the code and data used in this article are available here. I’ll be back in a bit with another comparison of more RAG tools!
I (and likely others) am curious to hear how you’ve been using the tool to optimize your RAG setups! Use tvalmetrics to score your RAG and Tonic Validate to visualize experiments, then let everyone know on X (@tonicfakedata) what you’re building, which RAG system you used, and which parameters you tweaked to improve your scores. Bonus points if you also include your charts from the UI. We’ll promote the best write-up and send you some Tonic swag as well.
Introduction
After last week’s post, one reader requested that we take a look at Haystack. Having never come across Haystack before, I was curious. After looking at Haystack’s GitHub, it seems to be focused more on the whole LLM pipeline than on RAG alone. In particular, it includes features for fine-tuning, semantic search, and decision making alongside the usual RAG capabilities. It even includes features for collecting user feedback so you can improve your models. At face value, a product like Haystack lowers the barrier to entry for building customized GPTs. However, these are hard problems to solve, and different projects take different approaches with different tradeoffs and results. So it’s always good to test your setup before putting it into production to ensure optimal performance. In this article, I compare Haystack to LangChain (another end-to-end RAG library) to see how each performs and give my opinion on which is best for production workloads.
Setting up the experiment
Preparing Haystack
To get started with Haystack, I have a collection of 212 essays from Paul Graham. You can read more about how I prepared this test set for the experiments in my earlier blog post, here. To ingest these documents into Haystack, I ran the following code:
import os

from haystack import Pipeline
from haystack.document_stores import InMemoryDocumentStore
from haystack.utils import add_example_data
from haystack.nodes import EmbeddingRetriever, PromptNode, PromptTemplate, AnswerParser

def get_haystack():
    # In-memory document store that holds the essays and their embeddings
    document_store = InMemoryDocumentStore(similarity="dot_product", embedding_dim=1536)
    add_example_data(document_store, "paul_graham_essays")

    # Retriever that embeds the documents and queries with OpenAI's ada-002 embeddings
    retriever = EmbeddingRetriever(
        document_store=document_store,
        batch_size=8,
        embedding_model="text-embedding-ada-002",
        api_key=os.environ["OPENAI_API_KEY"],
        max_seq_len=1536,
        progress_bar=False
    )
    document_store.update_embeddings(retriever)

    # Prompt node that sends the retrieved context and question to GPT-4 Turbo
    prompt_template = PromptTemplate(
        prompt="deepset/question-answering",
        output_parser=AnswerParser(),
    )
    prompt_node = PromptNode(
        model_name_or_path="gpt-4-1106-preview",
        api_key=os.environ["OPENAI_API_KEY"],
        default_prompt_template=prompt_template,
        max_length=2048
    )

    # Wire the retriever and prompt node together into a query pipeline
    query_pipeline = Pipeline()
    query_pipeline.add_node(component=retriever, name="retriever", inputs=["Query"])
    query_pipeline.add_node(component=prompt_node, name="prompt_node", inputs=["retriever"])
    return query_pipeline

haystack_pipeline = get_haystack()

def get_haystack_response(query):
    haystack_response = haystack_pipeline.run(query=query)
    return haystack_response['answers'][0].answer
This code sets up a document store to hold our embeddings. Then it adds all the essays to the document store and computes embeddings for them. After that, it configures the pipeline to search the embeddings and query GPT-4 Turbo.
Preparing LangChain
Similar to Haystack, I am going to set up a document store to hold the embeddings using the following code:
from langchain.document_loaders import DirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

def get_langchain_retriever():
    # Load the essays, split them into chunks, embed them, and store them in Chroma
    loader = DirectoryLoader('paul_graham_essays', show_progress=True)
    docs = loader.load()
    text_splitter = RecursiveCharacterTextSplitter()
    split_docs = text_splitter.split_documents(docs)
    embeddings = OpenAIEmbeddings()
    db = Chroma.from_documents(split_docs, embeddings)
    return db.as_retriever()
With the retriever in place, I can set up the pipeline that searches the document store and passes the results to the LLM.
from langchain.prompts import ChatPromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain.schema.runnable import RunnablePassthrough
from langchain.schema.output_parser import StrOutputParser

def format_docs(docs):
    # Join the retrieved chunks into a single context string
    return "\n\n".join([d.page_content for d in docs])

def get_langchain():
    template = """Answer the question based only on the following context:
{context}

Question: {question}
"""
    prompt = ChatPromptTemplate.from_template(template)
    model = ChatOpenAI(model_name="gpt-4-1106-preview")
    retriever = get_langchain_retriever()

    # Retrieve context, fill the prompt, call GPT-4 Turbo, and parse the output to a string
    chain = (
        {"context": retriever | format_docs, "question": RunnablePassthrough()}
        | prompt
        | model
        | StrOutputParser()
    )
    return chain

langchain_pipeline = get_langchain()
An easy job to start things off
To start the comparison, let’s give both Haystack and LangChain an easy question about one of the essays:
print(get_haystack_response(
    "What was Airbnb's monthly financial goal to achieve ramen profitability during their time at Y Combinator?"
))

print(langchain_pipeline.invoke(
    "What was Airbnb's monthly financial goal to achieve ramen profitability during their time at Y Combinator?"
))
Both systems gave the same answer to the question (which is the correct answer).
Airbnb's monthly financial goal to achieve ramen profitability during their time at Y Combinator was $4000 a month.
Evaluating the RAG Systems
To run a more thorough analysis on these systems, I am going to use tvalmetrics, which provides an easy way to score RAG systems based on various metrics (you can read more about these in the GitHub repo). In our case, we are going to use the answer similarity score, which scores how similar the LLM’s answer is to the correct answer for a given question. For running tvalmetrics, I created a benchmark of 55 question-answer pairs from a random selection of 30 Paul Graham essays. I can then run both RAG systems through all the questions, collect the RAG-facilitated LLM responses, and pass both the LLM’s answers and the ideal answers from the benchmark set to tvalmetrics. Using these data, tvalmetrics will automatically score the LLM responses, giving me a quantitative idea of how each RAG system is performing.
To get started with this, I ran the following code to load the questions and gather the LLM’s answers using both RAG systems:
from concurrent.futures import ThreadPoolExecutor
from tqdm import tqdm
import json

# Load the question-answer pairs from qa_pairs.json
with open('qa_pairs.json', 'r') as qa_file:
    qa_pairs = json.load(qa_file)

# Pull out the questions and reference answers
# (assumes each pair in qa_pairs.json has 'question' and 'answer' fields)
question_list = [qa['question'] for qa in qa_pairs]
answer_list = [qa['answer'] for qa in qa_pairs]

# Use a ThreadPoolExecutor to run the questions through each RAG system in parallel
with ThreadPoolExecutor(max_workers=10) as executor:
    langchain_responses = list(tqdm(executor.map(langchain_pipeline.invoke, question_list), total=len(question_list)))

with ThreadPoolExecutor(max_workers=10) as executor:
    haystack_responses = list(tqdm(executor.map(get_haystack_response, question_list), total=len(question_list)))
After the LLM’s answers are stored, I can pass them to tvalmetrics to score them.
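For reference, score_calculator is the tvalmetrics scores calculator. Below is a minimal sketch of how to set it up with GPT-4 as the LLM evaluator, assuming tvalmetrics exposes a RagScoresCalculator class; check the package’s README for the exact class and constructor arguments in the version you have installed.

from tvalmetrics import RagScoresCalculator

# Scores calculator that uses GPT-4 as the LLM evaluator for answer similarity.
# The class and argument names here may differ across tvalmetrics versions.
score_calculator = RagScoresCalculator(model="gpt-4")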
haystack_scores = score_calculator.score_batch(
    question_list=question_list,
    reference_answer_list=answer_list,
    llm_answer_list=haystack_responses,
)
langchain_scores = score_calculator.score_batch(
    question_list=question_list,
    reference_answer_list=answer_list,
    llm_answer_list=langchain_responses,
)
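To compare the two systems, I look at the mean, minimum, and standard deviation of the per-question answer similarity scores. As a minimal sketch, assuming you have pulled each system’s per-question scores out of the tvalmetrics results into a plain Python list (the exact attribute or dataframe accessor depends on your tvalmetrics version), the aggregation looks like this:

import statistics

# Hypothetical helper: summarize a list of per-question answer similarity scores.
def summarize(similarity_scores):
    return {
        "mean": statistics.mean(similarity_scores),
        "min": min(similarity_scores),
        "stdev": statistics.stdev(similarity_scores),
    }

# e.g. summarize(haystack_similarity_scores) vs. summarize(langchain_similarity_scores)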
Once tvalmetrics finished processing, I observed the following results:
Across the board, Haystack performed better, although LangChain’s performance was also strong (especially considering the low scores of competitor systems like OpenAI Assistants). With a higher average and minimum answer similarity score, Haystack’s RAG system provided a correct (or close to correct) answer more often than LangChain’s system. The lower standard deviation further means that response quality was more consistent across the 55 tests I ran on both systems. The results are promising, but I encourage you to use tvalmetrics and replicate the experiment using your own models and benchmarks.
Conclusion
While both systems performed well, Haystack is the winner here, performing better than LangChain overall. I also found Haystack’s system much easier to work with: its documentation is drastically better than LangChain’s, and I would recommend Haystack in production for that reason. The exception is if you need to integrate RAG into a more complex system, such as one built around agents. In that case, LangChain’s integration with its agent framework makes it a much more attractive option, and in general LangChain is built for setups where you are orchestrating many services across the whole stack. However, if you are using RAG to build or improve a simple chatbot, you’ll be perfectly fine using Haystack.
All the code and data used in this article are available here. I’m curious to hear your take on Haystack, LangChain, and tvalmetrics! Reach out to me at ethanp@tonic.ai.