
LangChain Hub

Efficiently manage your LLM components with the LangChain Hub. For dedicated documentation, please see the hub docs.

  • RetrievalQA Chain: use prompts from the hub in an example RAG pipeline (see the first sketch below).
  • Prompt Versioning: ensure deployment stability by pinning specific prompt versions instead of 'latest'.
  • Runnable PromptTemplate: streamline saving prompts to the hub from the playground and integrating them into runnable chains (see the second sketch below).
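
A minimal sketch of the first two items, assuming `langchain`, `langchain-openai`, `langchain-community`, and `faiss-cpu` are installed and an OpenAI API key is configured; the `rlm/rag-prompt` handle, model name, and sample texts are illustrative only.

```python
from langchain import hub
from langchain.chains import RetrievalQA
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Pull a shared RAG prompt from the hub; appending ":<commit hash>" to the
# handle pins a specific version instead of the latest one.
prompt = hub.pull("rlm/rag-prompt")

# Tiny in-memory corpus standing in for a real document store.
texts = [
    "LangChain Hub stores, versions, and shares prompts.",
    "Prompts are pulled by handle, optionally pinned to a commit hash.",
]
retriever = FAISS.from_texts(texts, OpenAIEmbeddings()).as_retriever()

# Wire the pulled prompt into a RetrievalQA chain.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-4o-mini"),
    retriever=retriever,
    chain_type_kwargs={"prompt": prompt},
)
print(qa.invoke({"query": "What does LangChain Hub do?"})["result"])
```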

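And a sketch of the third item: pushing a prompt to the hub under your own handle and composing the pulled version into a runnable chain. The `my-handle/summarize-prompt` handle and the commit hash are placeholders, and pushing requires a LangSmith API key.

```python
from langchain import hub
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Save a prompt to the hub under your own handle (placeholder name).
prompt = ChatPromptTemplate.from_template("Summarize this text:\n\n{text}")
hub.push("my-handle/summarize-prompt", prompt)

# Pull it back, optionally pinned to a specific commit hash for stable deploys.
pinned = hub.pull("my-handle/summarize-prompt:abc12345")

# A pulled prompt is a runnable, so it composes directly into a chain.
chain = pinned | ChatOpenAI(model="gpt-4o-mini")
print(chain.invoke({"text": "LangChain Hub manages prompt versions."}).content)
```
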