Prompt Engineering Whitepaper by Google

Google just published a whitepaper on prompt engineering that goes deep on prompt structure, formatting, model configuration settings, and worked examples. It's written around Gemini, but most of the techniques apply to any LLM. Here's what it covers:

  1. How to get more predictable, reliable output by tuning temperature, top-p, and top-k (a quick config sketch follows this list)
  2. Prompting techniques for API use, including system prompts, chain-of-thought, and ReAct (reason and act); see the second sketch below
  3. How to write prompts that return structured output such as JSON or another specific format; see the third sketch below
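
To make item 1 concrete, here's a minimal sketch of setting the sampling parameters on a Gemini call, assuming the google-generativeai Python SDK; the model name and the specific values are illustrative, not recommendations from the whitepaper. The general pattern: lower values make output more deterministic, higher values more diverse.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Lower temperature plus tighter top-p/top-k narrows the sampling
# distribution, trading creativity for more repeatable output.
model = genai.GenerativeModel(
    "gemini-1.5-flash",  # illustrative model name
    generation_config={
        "temperature": 0.2,  # near 0 = close to greedy decoding; higher = more random
        "top_p": 0.9,        # nucleus sampling: keep tokens covering 90% of probability mass
        "top_k": 40,         # only sample from the 40 most likely tokens
    },
)

print(model.generate_content("Summarize prompt engineering in one sentence.").text)
```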
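
For item 2, a sketch combining a system prompt with a chain-of-thought cue (the "think step by step" phrasing), again assuming the google-generativeai SDK; the prompt text is made up for illustration. ReAct goes further by interleaving this kind of reasoning with tool calls, which needs a tool-use loop and is omitted here.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# The system prompt sets a persistent role and output expectations;
# the "step by step" cue elicits intermediate reasoning before the answer.
model = genai.GenerativeModel(
    "gemini-1.5-flash",
    system_instruction="You are a careful math tutor. Show your reasoning before giving the final answer.",
)

prompt = (
    "When I was 3 years old, my partner was 3 times my age. "
    "Now I am 20 years old. How old is my partner? Let's think step by step."
)
print(model.generate_content(prompt).text)
```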
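
And for item 3, a sketch of requesting JSON directly via the response MIME type while spelling out the expected keys in the prompt; the schema and example text are hypothetical, not taken from the whitepaper.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Asking for application/json constrains the model to emit parseable JSON
# instead of prose wrapped around a code block.
model = genai.GenerativeModel(
    "gemini-1.5-flash",
    generation_config={"response_mime_type": "application/json"},
)

prompt = (
    "Extract the order from this message as JSON with keys "
    '"size", "type", and "ingredients" (a list): '
    "\"I'd like a large pizza with tomato sauce, basil and mozzarella.\""
)
response = model.generate_content(prompt)
print(response.text)  # e.g. {"size": "large", "type": "pizza", "ingredients": [...]}
```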

Link: Prompt Engineering Whitepaper (Google, 2025)

Thanks, Oleg!