How To Make ChatGPT Follow Instructions From Previous Commands

How To Articles

Have you ever wished your AI chatbot would remember the directions you gave it in previous commands? You're in luck: this article shows how to make ChatGPT follow instructions from prior commands, giving conversations a more personalized, coherent feel. We'll also walk through the technical details so you can understand exactly how it works.

Introducing Previous Command Memory

By default, language models like ChatGPT have no built-in memory of previous commands: each request is processed independently. With a small amount of code, however, we can keep track of the conversation ourselves and pass that history back to the model, so it can use the earlier instructions to produce more coherent responses.

Building on top of OpenAI’s library, we can utilize the message-passing feature to maintain a state of conversation history. The conversation state is represented as a list of messages, where each message has a ‘role’ (‘system’, ‘user’, or ‘assistant’) and ‘content’ (the text of the message).

To make ChatGPT remember and follow instructions from previous commands, we need to append the previous user instructions to the conversation history. This way, the model has access to the context of the conversation and can generate responses accordingly.
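As a minimal sketch of that idea, the conversation state is just a Python list of role/content dictionaries that grows with every turn (the helper names here are illustrative, not part of any library):

```python
def make_conversation(system_prompt):
    # The conversation starts with a single system message.
    return [{'role': 'system', 'content': system_prompt}]

def add_user_message(conversation, text):
    # Each new command is appended, so earlier instructions stay visible.
    conversation.append({'role': 'user', 'content': text})
    return conversation

conversation = make_conversation('You are a helpful assistant.')
add_user_message(conversation, 'Tell me a joke.')
add_user_message(conversation, 'Make me laugh.')
# The model would receive all three messages, in order.
```

Because the whole list is sent on every request, the model always sees the complete instruction history rather than just the latest command.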

Implementing the Code

Let’s take a look at the code snippet below to see how we can implement previous command memory in ChatGPT:

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')

# Conversation history
conversation = [
    {'role': 'system', 'content': 'You are a helpful assistant.'},
    {'role': 'user', 'content': 'Tell me a joke.'},
]

# Append user instructions to conversation history
user_input = "Make me laugh."
conversation.append({'role': 'user', 'content': user_input})

# Generate a response from the full conversation history
prompt = "\n".join(x['content'] for x in conversation)
input_ids = tokenizer.encode(prompt, return_tensors='pt')
response = model.generate(input_ids, max_new_tokens=50,
                          pad_token_id=tokenizer.eos_token_id)
response_text = tokenizer.decode(response[0], skip_special_tokens=True)


First, we import the necessary modules and load the pre-trained GPT model and tokenizer. Then, we set up the conversation history as a list of messages, starting with a system message and a user message.

To make ChatGPT follow instructions from previous commands, we simply append the user instruction to the conversation history. This ensures that the model has access to the previous instructions when generating a response.

Finally, we generate a response using the conversation history and the pre-trained model. The response is decoded using the tokenizer to obtain the text output.
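One detail worth knowing: for decoder-only models like GPT-2, the sequence returned by model.generate starts with the prompt tokens, and the newly generated tokens follow. If you only want the reply itself, slice off the prompt length before decoding (a sketch with plain lists standing in for tensors):

```python
prompt_tokens = [101, 102, 103]              # stand-in for the encoded conversation
generated = prompt_tokens + [201, 202]       # generate() echoes the prompt, then appends new tokens
new_tokens = generated[len(prompt_tokens):]  # only the freshly generated part
```

With real tensors, the equivalent slice is `response[0][input_ids.shape[-1]:]`, which you would then pass to `tokenizer.decode`.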


With the addition of previous command memory, we can make ChatGPT follow instructions from previous commands effectively. By appending the user instructions to the conversation history, we give the model access to the context and enable it to generate more coherent and context-aware responses.
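One practical caveat: the history grows with every turn, and models can only attend to a limited context window. A common remedy is to keep the system message plus only the most recent turns; here is a minimal sketch (the function name and message limit are illustrative):

```python
def trim_history(conversation, max_messages=6):
    # Always keep the system message(s); drop the oldest user/assistant turns.
    system = [m for m in conversation if m['role'] == 'system']
    rest = [m for m in conversation if m['role'] != 'system']
    return system + rest[-max_messages:]

history = [{'role': 'system', 'content': 'You are a helpful assistant.'}]
history += [{'role': 'user', 'content': f'command {i}'} for i in range(10)]
trimmed = trim_history(history)  # 1 system message + the 6 newest commands
```

The trade-off is that instructions older than the window are forgotten, so the limit should be chosen with the model's context size in mind.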

Remember to experiment with different conversation setups and test the model’s ability to follow instructions. Happy coding!