RT @hwchase17: ⭐️Contextual Compression⭐️
We introduce multiple new methods in @LangChainAI to compress retrieved documents w.r.t. the query before passing them to an LLM for generation
Inspired by @willpienaar at the "LLMs in production" conference
Blog: https://t.co/bs7psooUry
🧵More details:
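
For reference, a minimal sketch of the pattern described above, assuming the `ContextualCompressionRetriever` and `LLMChainExtractor` classes available in LangChain around the time of this announcement (import paths and the toy corpus here are illustrative and may differ across versions):

```python
# Sketch: wrap a base retriever so each retrieved document is compressed
# with respect to the query before it reaches the LLM for generation.
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor

# Toy corpus and base retriever (any vector store would work here).
texts = [
    "LangChain is a framework for building LLM applications.",
    "Contextual compression trims retrieved documents down to the parts relevant to the query.",
]
base_retriever = FAISS.from_texts(texts, OpenAIEmbeddings()).as_retriever()

# An LLM-backed compressor that extracts only the query-relevant passages.
compressor = LLMChainExtractor.from_llm(ChatOpenAI(temperature=0))

# The compression retriever first runs the base retriever, then compresses each hit.
retriever = ContextualCompressionRetriever(
    base_compressor=compressor,
    base_retriever=base_retriever,
)

docs = retriever.get_relevant_documents("What is contextual compression?")
for d in docs:
    print(d.page_content)
```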