```
334 elif self.search_type == "mmr":
335     docs = self.vectorstore.max_marginal_relevance_search(
336         query, **self.search_kwargs
337     )

File ~/anaconda3/envs/Langchain/lib/python3.10/site-packages/langchain/vectorstores/chroma.py:171, in Chroma.similarity_search(self, query, k, filter, ...
```
```python
}
retriever = vectordb.as_retriever(search_type="mmr", search_kwargs=search_kwargs)
chain = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=retriever,
    chain_type="stuff",
    verbose=True,
    max_tokens_limit=4096,
)
chain({"question": "ABC ABC ABC ABC", "chat_history": []})
```
###...
Maximal marginal relevance (MMR): when MMR selects tags, the tags are not treated as independent of one another; instead, tags are selected iteratively, adding one tag to the item at a time. Given an item $i$ and its current tag set $T_i$, MMR selects the next tag by maximizing:

$$MMR(t;T_i)=\lambda\, Sim_{item}(t,i)-(1-\lambda)\max_{t' \in T_i} Sim_{tag}(t,t')$$
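The iterative selection above can be sketched in a few lines of NumPy. This is a minimal illustration, not the LangChain implementation: `mmr_select`, `item_vec`, and `tag_vecs` are hypothetical names, cosine similarity stands in for both $Sim_{item}$ and $Sim_{tag}$, and ties are broken by candidate order.

```python
import numpy as np

def mmr_select(item_vec, tag_vecs, lam=0.5, k=3):
    """Iteratively pick k tag indices, trading off relevance to the
    item (weight lam) against redundancy with already-picked tags."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    candidates = list(range(len(tag_vecs)))
    selected = []
    while candidates and len(selected) < k:
        def score(t):
            rel = cos(tag_vecs[t], item_vec)   # Sim_item(t, i)
            red = max((cos(tag_vecs[t], tag_vecs[s]) for s in selected),
                      default=0.0)             # max over t' in T_i of Sim_tag(t, t')
            return lam * rel - (1 - lam) * red
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

item = np.array([1.0, 0.0])
tags = [np.array([1.0, 0.0]),   # identical to the item
        np.array([0.99, 0.1]),  # nearly identical (redundant)
        np.array([0.0, 1.0])]   # orthogonal (diverse)

print(mmr_select(item, tags, lam=1.0, k=2))  # pure relevance: [0, 1]
print(mmr_select(item, tags, lam=0.3, k=2))  # diversity-weighted: [0, 2]
```

With $\lambda=1$ the penalty term vanishes and MMR degenerates to plain similarity ranking; lowering $\lambda$ makes the second pick avoid the tag that is nearly a duplicate of the first.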
Issue you'd like to raise.

I found that using a RetrievalQA for streaming produces a gibberish response. For example, using a RetrievalQA with the code below on the state_of_the_union.txt example:

```python
doc_chain = load_qa_chain(
    llm=ChatOpenAI(
        stre...
```