Comparing LangChain and LlamaIndex with 4 tasks

Ming
10 min read · Jan 11, 2024

Update: Someone translated this post into Chinese without saying so :( Maybe they didn’t know I read Mandarin. That said, they did proofread their machine-translated copy, so I’m upset and happy at the same time.

LangChain vs. LlamaIndex — How do they compare?

Generated via MidJourney. Prompt: “A llama and a chain, facing off.” Apparently, MidJourney misunderstood.

In Why RAG is big, I stated my support for Retrieval-Augmented Generation (RAG) as the key technology of private, offline, decentralized LLM applications. When you build something for your own use, you’re fighting alone. You can build from scratch, but it would be more efficient to build upon an existing framework.

As far as I know, two choices exist, aiming at different scopes:

  • LangChain, a generic framework for developing applications with LLMs.
  • LlamaIndex, a framework dedicated to building RAG systems.
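To make the scopes concrete, here is a framework-free sketch of the retrieve-then-generate loop that both frameworks wrap. The embedding is a toy bag-of-words vector and the "prompt" is just a string; in a real pipeline an embedding model, vector store, and LLM call would take their places (all names here are illustrative, not either library's API):

```python
import re
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words counts (real systems use a model)."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by similarity to the query and keep the top k."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Stuff the retrieved context into the prompt sent to the LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

docs = [
    "LlamaIndex is a framework dedicated to RAG systems.",
    "LangChain is a generic framework for LLM applications.",
]
print(build_prompt("What is LlamaIndex?", docs))
```

LlamaIndex specializes in exactly this loop (ingestion, indexing, retrieval), while LangChain treats it as one of many composable pieces.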

Picking a framework is a big investment: you want one with strong maintainers and a vibrant community. Fortunately, both projects incorporated as companies last year, so their sizes are quantifiable. Here’s how the numbers compare:

I wish Medium had tables.


Responses (11)


Very nice post. One question - how exactly did you set up the "LLM running in a standalone inference server"? Using Ollama? Thanks!


Very well-written comparison. Many thanks and congratulations. One request: if possible, please update the LlamaIndex code to make it compatible with LlamaIndex v0.10.x — it seems there are lots of changes in the 0.10 update.
