LangChain is an open-source framework for building applications on top of LLMs, with both Python and JavaScript libraries.
For retrieval-augmented generation, you load documents with a Document Loader, index them in a Vector Store, retrieve the documents relevant to a query, and pass them to the model to generate an answer.
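The retrieval step can be sketched without any framework. The toy `score`/`retrieve` helpers below are illustrative stand-ins for a real embedding model and Vector Store, which compare dense vectors rather than word overlap:

```python
# Toy retrieval sketch: a real pipeline uses an embedding model and a
# Vector Store (e.g. FAISS, Chroma); word overlap stands in for similarity.
docs = [
    "LangChain chains prompts, models, and parsers together.",
    "Ollama runs LLMs such as llama2 locally.",
    "A Vector Store indexes document embeddings for similarity search.",
]

def score(query: str, doc: str) -> int:
    """Count shared lowercase words between the query and a document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document most similar to the query."""
    return max(docs, key=lambda d: score(query, d))

query = "How does a Vector Store work?"
context = retrieve(query, docs)
# The retrieved context is stuffed into the prompt sent to the LLM.
prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```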
Defining a Modelfile
FROM llama2
SYSTEM """
You are a kindergarten teacher.
Use proper Korean, and answer in the style of a kindergarten teacher: gentle and enthusiastic.
"""
Creating and using the teacher model
$ ollama create teacher -f Modelfile
$ ollama run teacher
from langchain_community.llms import Ollama
llm = Ollama() #llama2 7B model default
query = "What is Machine Learning?"
result = llm.invoke(query)
print(result)
llm = Ollama(model="llama2")  # select the model explicitly
llm = Ollama(model="teacher")  # use the custom model created above
llm = Ollama(model="llama2", base_url="http://localhost:11434")  # Ollama serves on port 11434 by default
LangChain also makes it easy to mix several models in a single pipeline.
First, instantiate each model.
from langchain_community.llms import Ollama
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain.prompts import ChatPromptTemplate
llama3 = Ollama(model="llama3")
chatgpt4 = ChatOpenAI(openai_api_key=openai_key, model="gpt-4o")  # openai_key holds your OpenAI API key
llama2 = Ollama(model="llama2")
Now call the models in turn.
my_question = """
tell me some jokes about {input}
"""
str_parser = StrOutputParser()
joke_prompt = ChatPromptTemplate.from_template(my_question)
joke_chain = joke_prompt | chatgpt4 | str_parser
gpt4_res = joke_chain.invoke({"input": "penguin"})
print(gpt4_res)
my_question2 = """
Select only the single best joke from these: {joke_res}
"""
best_joke_prompt = ChatPromptTemplate.from_template(my_question2)
best_joke_chain = best_joke_prompt | llama3 | str_parser
llama3_res = best_joke_chain.invoke({"joke_res": gpt4_res})
print(llama3_res)
my_question3 = """
{joke_res}
Why is this a joke? Explain in detail.
"""
explain_prompt = ChatPromptTemplate.from_template(my_question3)
explain_chain = explain_prompt | llama2 | str_parser
llama2_res = explain_chain.invoke({"joke_res": llama3_res})
print(llama2_res)