AI Automated Integration Test Generation

We design and deploy artificial intelligence systems, from prototypes to production-ready solutions. Our team combines expertise in machine learning, data engineering, and MLOps to make AI work not just in the lab but in real business settings.

AI Auto-Generation of Integration Tests

Integration tests verify component interactions: service A calls service B, the result is written to a database, an event goes to a queue. Writing such tests is harder than writing unit tests: you need to understand the dependency graph, configure test environments, and set up fixtures. An AI generator analyzes the service architecture, database schema, and OpenAPI specs, and creates tests for the critical integration paths.
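As one illustration of the analysis step, a generator might first flatten a service's OpenAPI spec into a list of endpoints so that a prompt only sees the operations relevant to a scenario. The helper and sample spec below are hypothetical, not part of the service described here:

```python
# Hypothetical sketch: flatten an OpenAPI "paths" object into endpoint rows.
def extract_endpoints(openapi_spec: dict) -> list[dict]:
    endpoints = []
    for path, methods in openapi_spec.get("paths", {}).items():
        for method, op in methods.items():
            endpoints.append({
                "method": method.upper(),
                "path": path,
                "operation_id": op.get("operationId", ""),
            })
    return endpoints

# Illustrative spec fragment
spec = {
    "paths": {
        "/orders": {"post": {"operationId": "createOrder"}},
        "/orders/{id}": {"get": {"operationId": "getOrder"}},
    }
}
endpoints = extract_endpoints(spec)  # two rows: POST /orders, GET /orders/{id}
```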

### Dependency Analysis and Test Generation

```python
from langchain_openai import ChatOpenAI
import json


class IntegrationTestGenerator:
    INTEGRATION_PROMPT = """Create an integration test for component interaction.

Components and their contracts:
{components}

Database schema:
{db_schema}

Integration scenario:
{scenario}

Requirements:
- pytest + SQLAlchemy for DB operations (use transactions with rollback)
- httpx.AsyncClient for HTTP calls
- pytest-asyncio for async tests
- Test database via a pytest fixture (not production!)
- Check not only the HTTP status but also the DB state after the operation
- Use factory_boy or pytest-factoryboy for test data

Return complete test code with fixtures."""

    def __init__(self):
        self.llm = ChatOpenAI(model="gpt-4o", temperature=0.1)

    def generate_db_integration_tests(
        self,
        model_code: str,
        repository_code: str,
        db_schema: str,
    ) -> str:
        prompt = f"""Create pytest integration tests for a Repository and Model.

SQLAlchemy Model:
{model_code}

Repository:
{repository_code}

DDL schema:
{db_schema}

Create tests for:
1. CRUD operations (create, read, update, delete)
2. Filtering and sorting
3. Transactions (successful commit, rollback on error)
4. Unique constraints (attempt to insert a duplicate)
5. Foreign key constraints
6. Pagination (if present in the Repository)

Fixtures:
- db_session: SQLAlchemy session with rollback after each test
- test data factories via factory_boy

Return test code."""
        return self.llm.invoke(prompt).content

    def generate_service_integration_tests(
        self,
        service_a_spec: dict,
        service_b_spec: dict,
        interaction_patterns: list[str],
    ) -> str:
        """Generates tests for inter-service interaction."""
        patterns = "\n".join(f"- {p}" for p in interaction_patterns)
        prompt = f"""Create pytest integration tests for service interaction.

Service A (client):
- Base URL: {service_a_spec['base_url']}
- Calls: {json.dumps(service_a_spec['calls'])}

Service B (server):
- Endpoints: {json.dumps(service_b_spec.get('endpoints', []))}

Interaction patterns:
{patterns}

Use:
- pytest + respx for mocking Service B HTTP responses
- Tests for retry logic (what happens on a Service B timeout)
- Tests for the circuit breaker (if one exists)
- Tests for correct handling of 4xx/5xx responses from Service B

Return test code with fixtures."""
        return self.llm.invoke(prompt).content
```


### Test Data and Fixtures Generation

```python
class TestDataGenerator:
    FACTORY_PROMPT = """Create factory_boy factories for models.

SQLAlchemy models:
{models_code}

Create:
1. Factory for each model
2. SubFactory for related objects
3. Trait for specific states (e.g. expired_user, admin_user)
4. Batch creation via factory.build_batch

Return factories code."""

    def __init__(self):
        self.llm = ChatOpenAI(model="gpt-4o", temperature=0.1)

    async def generate_factories(self, models_code: str) -> str:
        result = await self.llm.ainvoke(
            self.FACTORY_PROMPT.format(models_code=models_code)
        )
        return result.content

    async def generate_fixtures_from_schema(self, schema: dict) -> str:
        """Creates pytest fixtures from DB schema"""
        prompt = f"""Create pytest fixtures for test database.

Schema: {json.dumps(schema, indent=2)}

Needed fixtures:
- engine: SQLAlchemy engine to test DB (PostgreSQL via pytest-postgresql)
- db_session: session with rollback after each test
- Fixtures for each table: minimal valid object
- seeded_db: database with initial data for e2e tests

Use scope='function' for db_session, scope='session' for engine.
Return Python code."""
        return (await self.llm.ainvoke(prompt)).content
```
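factory_boy itself is a third-party dependency, but the idea the `FACTORY_PROMPT` asks for (sensible defaults, traits for specific states, batch creation) can be sketched with the stdlib alone. All names below are illustrative:

```python
from dataclasses import dataclass
import itertools

# Stdlib stand-in for a factory_boy factory: defaults, a "trait" (expired),
# and batch creation. Names here are illustrative, not from the document.
_ids = itertools.count(1)

@dataclass
class User:
    id: int
    email: str
    is_active: bool = True

def user_factory(*, expired: bool = False, **overrides) -> User:
    uid = next(_ids)
    defaults = {
        "id": uid,
        "email": f"user{uid}@example.com",
        "is_active": not expired,  # the trait flips a derived default
    }
    defaults.update(overrides)
    return User(**defaults)

def build_batch(n: int, **kwargs) -> list[User]:
    """Mirror of factory.build_batch: n objects with fresh unique ids."""
    return [user_factory(**kwargs) for _ in range(n)]

active = user_factory()
expired_user = user_factory(expired=True)
batch = build_batch(3)
```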

### Tests for Message Queues

```python
class QueueTestGenerator:
    """Queue-test generator following the same pattern as the classes above."""

    QUEUE_INTEGRATION_PROMPT = """Create a test for message queue integration.

Producer code:
{producer_code}

Consumer code:
{consumer_code}

Use:
- pytest + testcontainers-python (RabbitMQ/Kafka in Docker)
- Check: message sent → processed → result in DB
- Test for the dead letter queue (unprocessed message → DLQ)
- Test for idempotency (reprocessing the same message)
- Processing wait timeout: asyncio.wait_for with timeout=10

Return complete test code."""

    def __init__(self):
        self.llm = ChatOpenAI(model="gpt-4o", temperature=0.1)

    def generate_queue_tests(self, producer_code: str, consumer_code: str) -> str:
        prompt = self.QUEUE_INTEGRATION_PROMPT.format(
            producer_code=producer_code, consumer_code=consumer_code
        )
        return self.llm.invoke(prompt).content
```
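One property the queue prompt asks the tests to cover is idempotency: redelivering the same message must not duplicate side effects. A minimal sketch of a consumer with that property (class and field names are illustrative):

```python
# Sketch of the idempotency property the generated queue tests check:
# reprocessing a message id must not repeat its side effects.
class IdempotentConsumer:
    def __init__(self):
        self.processed_ids: set[str] = set()
        self.results: list[str] = []  # stands in for rows written to the DB

    def handle(self, message: dict) -> bool:
        """Return True if processed, False if skipped as a duplicate."""
        msg_id = message["id"]
        if msg_id in self.processed_ids:
            return False  # duplicate delivery: skip side effects
        self.processed_ids.add(msg_id)
        self.results.append(message["payload"])
        return True

consumer = IdempotentConsumer()
first = consumer.handle({"id": "m1", "payload": "order-42"})
second = consumer.handle({"id": "m1", "payload": "order-42"})  # redelivery
# consumer.results holds a single entry despite two deliveries
```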

Case study: an e-commerce platform with 15 microservices. The problem: integration testing was manual before each release (two QA engineers for three days). We generated 180 integration tests for the critical paths: order → payment → notification → warehouse update. On the first run, 23 of the 180 tests failed, uncovering real bugs in currency conversion handling, event queue duplication, and an incorrect status on partial payment.

Timeframe: integration tests for the DB and services take 3–5 weeks; adding queues and a full environment via testcontainers takes 5–7 weeks.