Part III: Working with LLMs
Prompting is programming with natural language. Every interaction with a large language model begins with a prompt, and the quality of that prompt determines the quality of the output. Yet most practitioners treat prompt engineering as an ad hoc trial-and-error process rather than a systematic discipline. This chapter changes that by presenting prompt engineering as a structured craft with well-defined techniques, measurable outcomes, and principled optimization strategies.
We begin with the foundational techniques: zero-shot and few-shot prompting, role assignment, system prompt design, and template construction. Next, we explore reasoning strategies that unlock the model's ability to solve complex problems: chain-of-thought prompting, self-consistency, tree-of-thought exploration, and the ReAct framework that interleaves reasoning with action. The third section covers advanced patterns including self-reflection loops, meta-prompting, prompt chaining, and automated prompt optimization with DSPy. Finally, we address the critical topics of prompt security and optimization: injection attacks, defense strategies, structured output enforcement, prompt compression, and systematic testing.
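Few-shot prompting, the first of the foundational techniques above, can be sketched as a simple template that places worked examples before the new input so the model infers the expected format from the demonstrations. This is a minimal illustration; the helper function and example data are our own, not an API introduced later in the chapter:

```python
# Minimal few-shot prompt template (hypothetical helper, for illustration):
# an instruction, a few labeled demonstrations, then the new query left
# open for the model to complete.

def build_few_shot_prompt(instruction, examples, query):
    """Assemble an instruction, labeled examples, and a new query
    into a single few-shot prompt string."""
    lines = [instruction, ""]
    for text, label in examples:
        lines.append(f"Input: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Label:")  # the model completes this line
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each input as positive or negative.",
    [("I loved this movie.", "positive"),
     ("The service was terrible.", "negative")],
    "The food exceeded my expectations.",
)
print(prompt)
```

Dropping the examples list yields the zero-shot variant: the instruction and query alone, with no demonstrations.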
By the end of this chapter, you will have a practical toolkit for designing, composing, and securing prompts across a wide range of applications, from simple classification tasks to complex multi-step reasoning pipelines.