Prompt Engineering for LLM CoT
Some thoughts on Prompt Optimization and Instructional Design for LLM CoT:
F(Instruction) = P(CoT) # the instruction and rules are designed to elicit the LLM's chain of thought
F(CoT) = P(thinking | Instruction) # the CoT is the chain of thought, i.e. the model's thinking process conditioned on the instruction
F(CoT) = F(thinking) = P(answer) # once the CoT process is completed according to the instruction, the final answer is generated
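A minimal sketch of the relationships above: the instruction shapes the chain of thought, and the answer is extracted only after the thinking step. The instruction text and the `Answer:` marker are assumptions for illustration, not from any specific model's spec.

```python
# Hypothetical CoT instruction; any phrasing that requests step-by-step
# reasoning before the final answer plays the same role.
COT_INSTRUCTION = "Think step by step, then give the final answer after 'Answer:'."

def build_cot_prompt(question: str, instruction: str = COT_INSTRUCTION) -> str:
    """Combine instruction and question so the model produces thinking first."""
    return f"{instruction}\n\nQuestion: {question}\nThinking:"

def extract_answer(completion: str) -> str:
    """Keep only the final answer, discarding the intermediate thinking."""
    marker = "Answer:"
    if marker in completion:
        return completion.split(marker, 1)[1].strip()
    return completion.strip()
```

In this framing, `build_cot_prompt` is F(Instruction) and `extract_answer` is the P(answer) step that runs only after the thinking text is produced.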
Instruction complexity affects how well the LLM understands the task; a concise instruction tends to yield better performance.
Divide and conquer is also a good way to optimize the CoT and the workflow. Chaining multiple simple tasks instead of issuing one complex task may work better, as supported by "Chain complex prompts for stronger performance".
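The divide-and-conquer idea above can be sketched as prompt chaining: each sub-task gets its own prompt, and each output feeds the next step. The `llm` callable is a hypothetical model call, stubbed here so the flow is runnable; the step prompts are illustrative only.

```python
from typing import Callable

def chain_prompts(steps: list[str], user_input: str,
                  llm: Callable[[str], str]) -> str:
    """Run each sub-task prompt in order, feeding each output to the next."""
    result = user_input
    for step in steps:
        result = llm(f"{step}\n\nInput:\n{result}")
    return result

# Stub standing in for a real model call: echoes which step it handled.
def fake_llm(prompt: str) -> str:
    return prompt.splitlines()[0] + " -> done"
```

With a real model call substituted for `fake_llm`, each link in the chain stays a small, concise instruction, which matches the point above that concise instructions perform better than one complex prompt.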
Reference
- Chain of Thought Prompting Elicits Reasoning in Large Language Models
- Llama 3: How to Prompt LLMs
- Jailbreak Prompt
- Anthropic Metaprompt
- Claude: Be Clear and Direct
- Claude: Prompt Engineering Interactive Tutorial
TODO:
- Read all the reference materials listed above.