Chapter 80 – LangChain Chat AI Model Applied in AI Web App & DApp for Beginners

Standalone AI models and apps like OpenAI and Google Vertex AI PaLM are just a starting point. From my perspective, applying diversified models to streamline a workflow and boost productivity in each specialised area is the key step to really leveraging their strengths.

In this piece, I will walk briefly through LangChain Chat AI models for beginners who are interested in building their own chain model in AI Web Apps & DApps, in order to boost productivity and create continuous hits in your game.

Table of Contents: LangChain Chat AI Model Applied in AI Web App & DApp for Beginners

Why LangChain, and Why Select a Chat AI Model

LangChain is a software development framework designed to simplify the creation of applications using large language models. Although developers can also build chains using other frameworks, LangChain has some outstanding features:

  • The chain framework generally simplifies the coding workload
  • Faster prompt responses between requests
  • Built-in integration with diversified AI models
  • Built-in integration with 3rd-party external modules
  • Trackable performance

From a financial perspective, using an AI model in a work application is basically a pay-as-you-go arrangement. Variable cost is therefore a key factor for any business to consider, because it can create a profit-margin bottleneck, or even lose money, if leveraged improperly.

Generally, Chat AI models like OpenAI GPT-3.5 are the cheapest per thousand tokens compared to non-chat models. If the performance and output capacity are similar to those of a higher-level model, the Chat AI model is preferred, notably when the model is used in a chain, which implies the request volume will be large.
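To make the variable-cost point concrete, here is a rough back-of-the-envelope comparison. The per-1K-token rates below are illustrative assumptions, not current prices, and `request_cost` is a hypothetical helper:

```python
def request_cost(prompt_tokens, completion_tokens, prompt_rate_per_1k, completion_rate_per_1k):
    """Estimate the USD cost of one request from token counts and per-1K-token rates."""
    return (prompt_tokens / 1000) * prompt_rate_per_1k \
         + (completion_tokens / 1000) * completion_rate_per_1k

# Hypothetical rates: a chat model vs a pricier non-chat completion model
chat = request_cost(500, 300, 0.0015, 0.002)   # = 0.00135 USD per request
non_chat = request_cost(500, 300, 0.02, 0.02)  # = 0.016 USD per request
print(f"chat: ${chat:.5f}, non-chat: ${non_chat:.5f}")
```

Over a chain that fires thousands of requests a day, a tenfold gap per request like this is exactly the margin bottleneck described above.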

LangChain Chat AI Model Fundamental Settings

In this piece, I take OpenAI as the sample. That said, you can use other AI models based on your actual needs. Although there are minor differences, such as the model name to import, the logic, underlying functions and flow are basically similar.

Here are the LangChain Python library installation, the packages to import and the main functions to add:

  • Library Installation

pip install langchain openai

  • Import required packages

import time

from langchain.chat_models import ChatOpenAI
from langchain import PromptTemplate, LLMChain
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage,
)
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.callbacks import get_openai_callback

  • Core function:

llm = ChatOpenAI(temperature=0.7, model="gpt-3.5-turbo", openai_api_key="<YOUR_OPENAI_API_KEY>")

Single Hit in One-way AI Communication

LangChain can fulfil a single request just as you would with ChatGPT or the OpenAI API. Below is a sample using the HumanMessage class, with the content argument carrying the prompt.

response = llm([HumanMessage(content=prompt)])

Single Hit in an Interaction with the AI System

Apart from one-way requests, we can set up a role or share a context dataset with the AI model in advance, in order to get more accurate results.

messages = [
    SystemMessage(content="You are a marketing specialist and specialise in writing engaging Tiktok Short form video scripts in English."),
    HumanMessage(content="Write me a Tiktok Short form video script to engage with Japanese Ramen fans")
]

response = llm(messages)

N+ Prompt Templates Using One AI Model

In reality, one output from a marketing application is more complex than the two samples given above. Many variable factors and prompt templates usually work together to generate it, even when using the same AI model.

Take marketing as the sample. As a marketing talent, copywriting is a skillset that can be applied to SEM, SEO, affiliate, social, email, product listings, video, advertorials, PR and so on. Furthermore, it might deal with different project content, brand stories, existing performance, etc. Thus, it requires you to equip yourself with a collection of prompt templates to build this app and automate the process.

template = "You are a {role} specialist and specialise in writing engaging {object} copies in English."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)

human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)

chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

response123 = llm(chat_prompt.format_prompt(role="marketing", object="email", text="Write me a welcome email to a first-time shoe buyer.").to_messages())
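Under the hood, format_prompt is essentially placeholder substitution. Here is a framework-free sketch of the same idea in plain Python, with no API call; `build_messages` is a hypothetical helper for illustration, not part of LangChain:

```python
system_template = "You are a {role} specialist and specialise in writing engaging {object} copies in English."
human_template = "{text}"

def build_messages(role, object, text):
    """Fill the placeholders and return role-tagged messages, mimicking ChatPromptTemplate."""
    return [
        {"role": "system", "content": system_template.format(role=role, object=object)},
        {"role": "user", "content": human_template.format(text=text)},
    ]

msgs = build_messages("marketing", "email", "Write me a welcome email to a first-time shoe buyer.")
print(msgs[0]["content"])
# You are a marketing specialist and specialise in writing engaging email copies in English.
```

Swapping role="marketing" for role="SEO" or object="email" for object="advertorial" is how one template collection covers the whole copywriting skillset listed above.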

N+ AI Models in One Chain

Apart from using the same model, as mentioned earlier, we might want to consider both AI performance and financial factors at the same time. That implies we need different AI models in one task, or across multiple tasks in a chain. For example, writing a tweet doesn't need a fancy and expensive AI model, given the character limit, as long as the model can read through the raw materials. On the other hand, generating a complex research article or a data analysis needs a more advanced model. This comes down to a mindset of optimising the AI model efficient frontier.

chain = LLMChain(llm=llm, prompt=chat_prompt)

finalResult = chain.run(role="marketing", object="email", text="Write me a welcome email to a first-time shoe buyer.")
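One way to act on that efficient-frontier mindset is a small router that picks a model per task before building each chain. Here is a minimal sketch; the model names and the length heuristic are illustrative assumptions, not a LangChain feature:

```python
def pick_model(task, expected_output_chars):
    """Route short-form tasks to a cheap chat model, long-form work to a stronger one."""
    if task == "tweet" or expected_output_chars <= 280:
        return "gpt-3.5-turbo"  # cheap and sufficient for short copy
    return "gpt-4"              # stronger, pricier model for long-form output

print(pick_model("tweet", 280))              # gpt-3.5-turbo
print(pick_model("research_article", 8000))  # gpt-4
```

Each chain can then be built with `ChatOpenAI(model=pick_model(...))`, so the cheap model handles the bulk of the volume and the expensive one is reserved for the tasks that justify it.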

Miscellaneous – Tracking Total Tokens Spent

If you are an AI model reseller or an AI app provider, tracking total tokens spent is critical because you need it to calculate your cost. LangChain provides an AI model callback package. Here is a sample for OpenAI:

with get_openai_callback() as cb:
    buyfromloAPIandAIYes = directResponse()
    print(buyfromloAPIandAIYes)
    buyfromloAPIandAI = cb
    print(buyfromloAPIandAI)

Tokens Used: 241
        Prompt Tokens: 17
        Completion Tokens: 224
Successful Requests: 2
Total Cost (USD): $0.000482
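As a sanity check, the reported total cost is consistent with a flat rate of $0.002 per 1K tokens across prompt and completion; treat that rate as an assumption, since OpenAI prices change over time:

```python
prompt_tokens = 17        # from the callback output above
completion_tokens = 224
rate_per_1k = 0.002       # assumed flat USD rate per 1K tokens

total_cost = (prompt_tokens + completion_tokens) * rate_per_1k / 1000
print(f"${total_cost:.6f}")  # $0.000482, matching the callback output
```

Recomputing the cost yourself like this is a useful cross-check when you resell token usage to your own customers.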

BuyfromLo AI APIs and Onsite App

If you would like to save the time of building an app or API with LangChain, please check out the BuyfromLo API and Onsite App store. We are currently running a free trial program, so give it a go!

https://www.buyfromlo.com

Full Core Functional Python Script of LangChain Chat AI Model Applied in AI Web Apps & DApps

If you are interested in Chapter 80 – LangChain Chat AI Model Applied in AI Web App & DApp for Beginners, please subscribe to our newsletter with the message 'Chapter 80 + Full LangChain scripts for AI Apps'. We will send you the script when the up-to-date app script is live.

I hope you enjoyed reading Chapter 80 – LangChain Chat AI Model Applied in AI Web App & DApp for Beginners. If you did, please support us by doing one of the things listed below, because it always helps our channel.