Debug Prompt
Auto‑generates AI‑friendly debug prompts for failed Vedro scenarios, helping you quickly spot root causes and apply the smallest possible fix.
✨ Features
- Zero‑config. Enable the plugin and you’re done.
- Automatic prompt generation. For every failed scenario a Markdown file is produced with the steps, scenario source, error message & traceback.
- LLM‑ready. A built‑in system prompt instructs ChatGPT (or any LLM) to analyse the failure, locate the bug, and suggest a minimal patch.
🛠 Installation
For a one‑liner install via Vedro’s plugin manager:
$ vedro plugin install vedro-debug-prompt
Prefer manual steps? No problem:
- Install the package:
$ pip install vedro-debug-prompt
- Enable the plugin in `vedro.cfg.py`:
import vedro
import vedro_debug_prompt

class Config(vedro.Config):
    class Plugins(vedro.Config.Plugins):
        class DebugPrompt(vedro_debug_prompt.DebugPrompt):
            enabled = True
🚀 Usage
Run your tests as usual:
$ vedro run
Example output when a scenario fails:
Scenarios
*
✗ decode base64 encoded string
|> AI Debug Prompt: .vedro/tmp/prompt_liywiyo1.md
✔ given_encoded_string
✔ when_user_decodes_string
✗ then_it_should_return_decoded_string
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /scenarios/decode_base64_str.py:20 in then_it_should_return │
│ │
│ 17 self.result = b64decode(self.encoded) │
│ 18 │
│ 19 def then_it_should_return_decoded_string(self): │
│ ❱ 20 assert self.result == "banana" │
│ 21 │
╰──────────────────────────────────────────────────────────────────────────────╯
AssertionError
>>> assert actual == expected
- 'banana'
+ b'banana'
# --seed 280dc986-e618-4102-aa53-056c2876c00e
# 1 scenario, 0 passed, 1 failed, 0 skipped (0.00s)
Open the referenced `prompt_*.md` file and paste its contents into ChatGPT. In most terminals you can ⌘‑click (or Ctrl‑click on Linux/Windows) the file path to open it instantly.
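Prefer to stay in the terminal? You can also copy the prompt straight to the clipboard; for example (the filename is randomly generated, so use the path from your own run):
$ pbcopy < .vedro/tmp/prompt_liywiyo1.md                        # macOS
$ xclip -selection clipboard < .vedro/tmp/prompt_liywiyo1.md    # Linux (requires xclip)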
A typical response might look like:
Root cause
The call to `b64decode` returns a `bytes` object (`b'banana'`), but the test asserts against a Python `str` (`"banana"`), so the types (and values) don't match.

Suggested fix (code)
Apply this minimal patch in `scenarios/decode_base64_str.py`:

  def when_user_decodes_string(self):
-     self.result = b64decode(self.encoded)
+     self.result = b64decode(self.encoded).decode('utf-8')

Why this works
By calling `.decode('utf-8')` on the `bytes` result, you convert it to a `str` (`"banana"`), which matches the test's expected value.
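For reference, here is a minimal sketch of the scenario after the patch is applied. Only the subject, the step names, and the file path come from the output above; the class layout and the encoded test value are assumptions.

```python
# scenarios/decode_base64_str.py (illustrative sketch, not the original source)
from base64 import b64decode

import vedro


class Scenario(vedro.Scenario):
    subject = "decode base64 encoded string"

    def given_encoded_string(self):
        # "YmFuYW5h" is base64 for "banana" (assumed test data)
        self.encoded = "YmFuYW5h"

    def when_user_decodes_string(self):
        # .decode() turns the bytes returned by b64decode into a str
        self.result = b64decode(self.encoded).decode("utf-8")

    def then_it_should_return_decoded_string(self):
        assert self.result == "banana"
```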
⚙️ Custom Prompts
Need a different tone, extra context, or even a completely new set of instructions for the LLM? You can swap out the built‑in `PromptBuilder` for your own implementation and override just the parts you care about. The most common tweak is changing the system prompt.
Example: overriding the system prompt
# vedro.cfg.py
import vedro
import vedro_debug_prompt


class MyPromptBuilder(vedro_debug_prompt.PromptBuilder):
    def _get_system_prompt(self) -> str:
        # Tell the LLM exactly how you want it to behave
        return "<system prompt>"


class Config(vedro.Config):
    class Plugins(vedro.Config.Plugins):
        class DebugPrompt(vedro_debug_prompt.DebugPrompt):
            enabled = True
            prompt_builder = MyPromptBuilder()  # ← use the customised builder
That’s it! The DebugPrompt plugin will now use your custom system prompt whenever it generates a prompt.
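The string returned by `_get_system_prompt` is entirely up to you. As a purely illustrative example (the wording below is an assumption, not the plugin's default), a builder that asks for terse, patch‑only answers could look like this:

```python
# vedro.cfg.py (illustrative sketch; only _get_system_prompt is overridden)
import vedro_debug_prompt


class TersePromptBuilder(vedro_debug_prompt.PromptBuilder):
    def _get_system_prompt(self) -> str:
        return (
            "You are a senior Python engineer reviewing a failed Vedro scenario. "
            "Identify the single most likely root cause and reply with the smallest "
            "possible patch as a unified diff. Do not rewrite unrelated code."
        )
```

Register it exactly as in the example above, via `prompt_builder = TersePromptBuilder()`.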