Readable · Scalable · Pragmatic
Vedro adapts to you, not the other way around. Keep it minimal with simple functions. Add structure with self-documenting steps. Take full control with class-based scenarios. Every test follows the proven Arrange-Act-Assert pattern; just pick your approach.
Start writing tests immediately with zero ceremony. If you know Python, you already know Vedro.
from pathlib import Path as File
from vedro import scenario

@scenario()
def create_file():
    # Arrange
    file = File('example.txt')
    # Act
    file.touch()
    # Assert
    assert file.exists()

Begin with the minimal style and evolve as your needs change. Add structure when it helps, not because it's required. Mix styles within the same project: simple tests stay simple, complex tests get structure.
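As a sketch of the full-control end of that spectrum, here is what the same test can look like as a class-based scenario with one method per step. This is only an illustration: a plain class stands in for Vedro's scenario base class so the example is self-contained, and the method names are made up for this sketch.

```python
import tempfile
from pathlib import Path

class CreateFile:
    # Sketch of the class-based style: a plain class stands in for
    # Vedro's scenario base class so this runs without the framework.
    subject = "create file"

    def given_a_file_path(self):
        # Arrange: use a temp directory to avoid touching the project tree
        self.file = Path(tempfile.mkdtemp()) / "example.txt"

    def when_file_is_created(self):
        # Act
        self.file.touch()

    def then_file_should_exist(self):
        # Assert
        assert self.file.exists()

# Calling the steps in order mimics what the test runner does for you
scenario = CreateFile()
scenario.given_a_file_path()
scenario.when_file_is_created()
scenario.then_file_should_exist()
```

Each step is small and named after its intent, which is exactly what makes the structured style pay off as tests grow.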
Steps automatically appear in test reports and terminal output without plugins or configuration. Code reviews become effortless when tests explain their intent clearly. No more guessing what a test does or why it failed.
Whether you write bare functions, context-managed steps, or class-based methods, every test follows the same AAA flow. Consistent structure means faster onboarding and zero cognitive overhead when switching between tests.
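To illustrate how context-managed steps keep that AAA flow explicit, here is a minimal hand-rolled helper mimicking the idea. This is not Vedro's API — Vedro surfaces step names in its output without any helper like this — just a stdlib sketch of the concept.

```python
from contextlib import contextmanager

@contextmanager
def step(name: str):
    # Echo the step name, mimicking how a runner can surface
    # self-documenting steps in terminal output.
    print(f"✔ {name}")
    yield

def build_greeting():
    with step("given a name"):
        name = "Alice"
    with step("when building greeting"):
        greeting = f"Hello {name}!"
    with step("then it should match"):
        assert greeting == "Hello Alice!"
    return greeting

result = build_greeting()
```

The test body reads like a description of the behavior under test, and the same three-phase shape carries over unchanged to the function-based and class-based styles.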
No magic, no DSL, no learning curve. Vedro uses Python's built-in assert statement, the one you already know and love. Write assertions exactly as you would in any Python code.
Compare the simplicity of Vedro with the complexity of other frameworks:

# Equality
assert greeting == 'Hello Alice!'                   # Vedro (plain Python)
self.assertEqual(greeting, 'Hello Alice!')          # unittest
expect(greeting).toEqual('Hello Alice!')            # Jest
expect(greeting).to.equal('Hello Alice!')           # Chai

# Membership
assert error_code not in [400, 500]                 # Vedro (plain Python)
self.assertNotIn(error_code, [400, 500])            # unittest
expect([400, 500]).not.toContain(error_code)        # Jest
expect([400, 500]).to.not.include(error_code)       # Chai

# Length
assert len(results) >= 10                           # Vedro (plain Python)
self.assertGreaterEqual(len(results), 10)           # unittest
expect(results.length).toBeGreaterThanOrEqual(10)   # Jest
expect(results).to.have.length.of.at.least(10)      # Chai

When tests fail, every second counts. Vedro's rich terminal output shows exactly what went wrong: clean color-coded diffs, relevant stack traces, and step-by-step context. Spend time fixing bugs, not deciphering error messages.
Scenarios
*
✗ build active users query (0.01s)
✔ given a query builder (0.00s)
✔ when building active user query (0.01s)
✗ then it should match expected SQL (0.00s)
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /app/tests/build_user_queries.py:27 in build_active_users_query │
│ │
│ 24 │
│ 25 query = user_query_builder.active_only().build() │
│ 26 │
│ ❱ 27 assert query == """SELECT u.id, u.name, u.email, u.created_at, u │
│ 28 │
╰──────────────────────────────────────────────────────────────────────────────╯
AssertionError
>>> assert actual == expected
- "SELECT u.id, u.name, u.email, u.created_at, u.updated_at FROM users u LEFT JOIN preferences p ON u.id = p.user
+ "SELECT u.id, u.name, email, u.created_at, u.updated_at FROM users u LEFT JOIN preferences p ON u.id = p.user_i
# --seed 9bcc2e2b-9281-4537-9e10-e7f6244c7d0e
# 1 scenario, 0 passed, 1 failed, 0 skipped (0.01s)
See exactly what changed with character-level highlighting. No more squinting at walls of text to spot the difference between expected and actual values.
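The idea behind character-level highlighting can be sketched with the standard library's difflib — this illustrates the concept, not Vedro's actual implementation:

```python
import difflib

expected = "SELECT u.id, u.name, u.email FROM users u"
actual = "SELECT u.id, name, u.email FROM users u"

# SequenceMatcher pinpoints the exact character ranges that differ,
# which is what lets a reporter highlight just the changed characters
# instead of dumping both strings side by side.
matcher = difflib.SequenceMatcher(None, expected, actual)
diffs = [(tag, expected[i1:i2], actual[j1:j2])
         for tag, i1, i2, j1, j2 in matcher.get_opcodes()
         if tag != "equal"]
```

Here the only non-equal opcode covers the missing `u.` prefix, so a reporter can mark those two characters and leave the rest of the query untouched.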
Get just the right amount of information. Vedro shows you the failing assertion with surrounding context, not overwhelming stack dumps.
Watch your test execution flow with clear step names and timing. Know exactly where things went wrong and how long each step took.
Monitor test execution with live progress indicators. See which tests are running, passing, or failing as it happens, not after everything completes.
Everything in Vedro is a plugin, even core features. That isn't just extensibility; it's a fundamental design principle that gives you complete control over your testing experience.
Hook into any test lifecycle event with a simple API
from vedro.core import Dispatcher, Plugin
from vedro.events import CleanupEvent

class SlackNotifierPlugin(Plugin):
    def subscribe(self, dispatcher: Dispatcher):
        dispatcher.listen(CleanupEvent, self.on_cleanup)

    def on_cleanup(self, event: CleanupEvent):
        if event.report.failed > 0:
            notify_slack(f"❌ {event.report.failed} tests failed!")

Join 1,000+ developers who've made their testing experience enjoyable again.
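To wire a custom plugin like the SlackNotifierPlugin above into a project, Vedro reads a vedro.cfg.py at the project root. A sketch, assuming the plugin class lives in a local slack_notifier.py module — the module name and layout here are illustrative:

```python
# vedro.cfg.py (sketch; assumes the plugin is defined in slack_notifier.py)
import vedro
from vedro.core import PluginConfig

from slack_notifier import SlackNotifierPlugin

class SlackNotifier(PluginConfig):
    # Each Plugin is paired with a PluginConfig that points at it
    plugin = SlackNotifierPlugin

class Config(vedro.Config):
    class Plugins(vedro.Config.Plugins):
        # Registering the config class enables the plugin for this project
        class SlackNotifier(SlackNotifier):
            enabled = True
```

With that in place, the plugin's event handlers run on every `vedro run` without any changes to the tests themselves.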