Focus on helping people, not on managing prompts.
```
pip install prompthandler
```
Example code to chat with the model in your terminal:
```python
from prompthandler import Prompthandler

model = Prompthandler()
model.add_system("You are now user's girlfriend so take care of him", to_head=True)  # Added to the head; the head is never rolled off, so it stays in every prompt
model.add_user("Hi")
model.chat()  # chat with the model in the terminal
```

For more examples, see examples.ipynb.

Example Projects

- my silicon version: github_repo
The openai_chat_gpt class represents the interaction with the GPT-3.5-turbo model (or other OpenAI models). It provides methods for generating completions for given messages and for managing the conversation history.
Attributes:
- api_key (str): The OpenAI API key.
- model (str): The name of the OpenAI model to use.
- MAX_TOKEN (int): The maximum number of tokens allowed for the generated completion.
- temperature (float): The temperature parameter controlling the randomness of the output.
Methods:
- __init__(self, api_key=None, MAX_TOKEN=4096, temperature=0, model="gpt-3.5-turbo-0613"): Initializes the OpenAI chat model with the provided settings.
- get_completion_for_message(self, message, temperature=None): Generates a completion for a given message using the specified OpenAI model.
  - message (list): List of messages representing the conversation history.
  - temperature (float): Controls the randomness of the output. If not provided, the default temperature is used.

  Returns a tuple containing the completion generated by the model and the number of tokens used by the completion.
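A minimal usage sketch for this low-level class. The import path for openai_chat_gpt is an assumption (the quickstart only shows Prompthandler), and the message format is assumed to be the standard OpenAI role/content dictionaries:

```python
from prompthandler import openai_chat_gpt  # assumed import path

model = openai_chat_gpt(api_key="sk-...", MAX_TOKEN=4096, temperature=0)

# Assumed message format: standard OpenAI role/content dictionaries.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize what a context window is."},
]

# Documented to return (completion, tokens_used_by_the_completion).
completion, used_tokens = model.get_completion_for_message(messages, temperature=0.7)
print(completion, used_tokens)
```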
The PromptHandler class represents a conversation prompt history for interacting with the GPT-3.5-turbo model (or other OpenAI models). It extends the openai_chat_gpt class and provides additional methods for handling prompts, headers, and body messages.
Attributes (in addition to openai_chat_gpt attributes):
- headers (list): List of header messages in the conversation history.
- body (list): List of body messages in the conversation history.
- head_tokens (int): Total tokens used in the headers.
- body_tokens (int): Total tokens used in the body.
- tokens (int): Total tokens used in the entire message history.
Methods (in addition to openai_chat_gpt methods):
- __init__(self, MAX_TOKEN=4096, api_key=None, temperature=0, model="gpt-3.5-turbo-0613"): Initializes the PromptHandler with the specified settings.
- get_completion(self, message='', update_history=True, temperature=None): Generates a completion for the conversation history.
  - message (str): The user's message to be added to the history.
  - update_history (bool): Flag to update the conversation history.
  - temperature (float): Controls the randomness of the output.

  Returns a tuple containing the completion generated by the model and the number of tokens used by the completion.
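For example, reusing the import from the quickstart above (the prompt text is illustrative only):

```python
from prompthandler import Prompthandler

handler = Prompthandler(MAX_TOKEN=4096, temperature=0)
handler.add_system("You are a concise assistant.", to_head=True)

# Adds the user message to the history and, with update_history=True,
# also stores the assistant's reply in the history.
reply, used_tokens = handler.get_completion("Explain prompt rolling in one sentence.")
print(reply)

# A one-off, more creative reply that leaves the stored history untouched.
alt_reply, _ = handler.get_completion("Give an alternative phrasing.",
                                      update_history=False, temperature=0.9)
```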
- chat(self, update_history=True, temperature=None): Starts a conversation with the model, reading input from the terminal and printing the model's responses.
  - update_history (bool): Flag to update the conversation history.
  - temperature (float): Controls the randomness of the output.
- update_messages(self): Combines the headers and body messages into a single message history.

  Returns the combined list of messages representing the conversation history.
- update_tokens(self): Updates the count of tokens used in the headers, body, and entire message history.

  Returns a tuple containing the total tokens used, tokens used in the headers, and tokens used in the body.
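A short sketch of using these two helpers together for inspection; the return shapes follow the descriptions above, and the setup reuses the quickstart import:

```python
from prompthandler import Prompthandler

handler = Prompthandler()
handler.add_system("You are a helpful assistant.", to_head=True)
handler.add_user("Hello!")

# Rebuild the combined history (headers followed by body).
history = handler.update_messages()

# Refresh the token bookkeeping and inspect it.
total, head_tokens, body_tokens = handler.update_tokens()
print(f"{len(history)} messages, {total} tokens "
      f"({head_tokens} in headers, {body_tokens} in body)")
```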
- calibrate(self, MAX_TOKEN=None): Calibrates the message history by removing older messages if the total token count exceeds MAX_TOKEN.
  - MAX_TOKEN (int): The maximum number of tokens allowed for the generated completion.
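To illustrate the rolling behaviour described above, here is a hedged sketch; it assumes calibrate() drops the oldest body messages (never the pinned headers) until the history fits under MAX_TOKEN:

```python
from prompthandler import Prompthandler

handler = Prompthandler(MAX_TOKEN=4096)
handler.add("system", "You are a helpful assistant.", to_head=True)  # pinned in the head

# Simulate a long conversation that overflows the token budget.
for i in range(500):
    handler.add("user", f"Message number {i}, padding out the history a little.")

handler.calibrate()                 # trim old body messages against the default MAX_TOKEN
handler.calibrate(MAX_TOKEN=1000)   # or trim harder against a tighter budget
print(handler.update_tokens())      # (total, head_tokens, body_tokens) after trimming
```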
- add(self, role, content, to_head=False): Adds a message to the message history.
  - role (str): The role of the message (user, assistant, etc.).
  - content (str): The content of the message.
  - to_head (bool): Specifies whether the message should be appended to the headers list. If False, it will be appended to the body list.

  Returns the last message in the message history.
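For instance (the quickstart's add_system/add_user calls look like convenience wrappers around this method, though that is an inference):

```python
from prompthandler import Prompthandler

handler = Prompthandler()

# A pinned instruction goes to the headers, so calibration never removes it.
handler.add("system", "Always answer in English.", to_head=True)

# Regular turns go to the body and can be rolled off as the history grows.
last = handler.add("user", "What's the weather like on Mars?")
print(last)  # the message just added, as a role/content dictionary
```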
- append(self, content_list): Appends a list of messages to the message history.
  - content_list (list): List of messages to be appended.
- get_last_message(self): Returns the last message in the message history as a dictionary containing the role and content of the message.
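A combined sketch of append and get_last_message, assuming content_list takes the same role/content dictionaries used elsewhere in the history:

```python
from prompthandler import Prompthandler

handler = Prompthandler()

# Bulk-load an earlier exchange into the history.
handler.append([
    {"role": "user", "content": "Hi there."},
    {"role": "assistant", "content": "Hello! How can I help?"},
])

print(handler.get_last_message())
# -> {"role": "assistant", "content": "Hello! How can I help?"}
```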
- get_token_for_message(self, messages, model_name="gpt-3.5-turbo-0613"): Returns the number of tokens used by a list of messages.
  - messages (list): List of messages to count tokens for.
  - model_name (str): The name of the OpenAI model used for token encoding.

  Returns the number of tokens used by the provided list of messages.
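For example, to check how many tokens a prompt will occupy before sending it (the exact count depends on the model's tokenizer, so treat the printed number as indicative):

```python
from prompthandler import Prompthandler

handler = Prompthandler()

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Count my tokens, please."},
]

n_tokens = handler.get_token_for_message(messages, model_name="gpt-3.5-turbo-0613")
print(n_tokens)  # tokens this message list will consume in the prompt
```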