Architecture
Agent Integration
How the Agno Python agents integrate with the NestJS API Gateway and Vert.x WebSocket edge.
The platform uses two Agno Python agents: the Planner Agent for skill validation and interview plan generation, and the Interview + Assessor Agent for conducting live interviews and generating post-interview assessments. Both communicate with the API Gateway via Kafka events.
Agent Architecture
| Agent | Trigger | Output | Events / state |
|---|---|---|---|
| Planner Agent | skill.validation.requested | Interview plan + greeting_script + inmail_draft | Publishes: plan.generated |
| Interview Agent | WebSocket connection from Vert.x | Real-time audio responses | Reads session state from Redis |
| Assessor Agent | interview.completed | Structured assessment JSON | Publishes: assessment.completed |
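The trigger-to-output wiring in the table can be sketched as a topic-to-handler dispatch table. Only the topic strings come from this page; the handler names and payload fields are illustrative stubs, not the real agent code:

```python
# Hypothetical dispatch of Kafka topics to agent handlers.
# Topic names come from the table above; handlers are illustrative stubs.

def handle_skill_validation(event: dict) -> dict:
    # Planner Agent: would validate skills and emit plan.generated
    return {"topic": "plan.generated", "planId": event.get("planId")}

def handle_interview_completed(event: dict) -> dict:
    # Assessor Agent: would score the transcript and emit assessment.completed
    return {"topic": "assessment.completed", "sessionId": event.get("sessionId")}

DISPATCH = {
    "skill.validation.requested": handle_skill_validation,
    "interview.completed": handle_interview_completed,
}

def route(topic: str, event: dict) -> dict:
    # The Interview Agent is absent here on purpose: per the table it is
    # triggered by a WebSocket connection, not by a Kafka topic.
    try:
        return DISPATCH[topic](event)
    except KeyError:
        raise ValueError(f"no agent subscribed to topic {topic!r}")
```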
Planner Agent
The Planner Agent consumes skill.validation.requested events, validates each skill via pgvector similarity search, generates a structured interview plan using Claude or GPT-4, and publishes the result back to plan.generated.
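The skill-validation step can be sketched as a pgvector similarity lookup. This is a minimal sketch under stated assumptions: the table and column names (`skills`, `name`, `embedding`), the cosine-distance threshold, and the `lookup` callable are all hypothetical; only the idea of matching requested skill names to known skill IDs comes from this page:

```python
# Hypothetical pgvector lookup: embed each requested skill name and match it
# against a skills table. Table/column names and the threshold are assumptions.

def build_skill_match_query(top_k: int = 1) -> str:
    # `<=>` is pgvector's cosine-distance operator; rows farther than
    # max_distance are treated as "no match" so unknown skills are rejected.
    return (
        "SELECT id, name, embedding <=> %(query_vec)s AS distance "
        "FROM skills "
        "WHERE embedding <=> %(query_vec)s < %(max_distance)s "
        "ORDER BY distance "
        f"LIMIT {int(top_k)}"
    )

def validate_skills(requested, lookup):
    """Map each requested skill to a matched skill ID, dropping misses.

    `lookup` abstracts the embed-and-query round-trip: name -> id or None.
    """
    matched = {}
    for skill in requested:
        skill_id = lookup(skill)
        if skill_id is not None:
            matched[skill] = skill_id
    return matched
```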
```text
Kafka: skill.validation.requested
  (includes: position, level, skills, duration,
   company_name, company_description, job_description)
        │
        ▼
Agno Planner Agent (Python)
  ├── pgvector similarity search (PostgreSQL)
  │     └── validate: ["TypeScript", "React", ...] → matched skill IDs
  ├── Build prompt with job description + company context + validated skills
  ├── LLM call (Vertex AI Gemini Pro)
  │     └── Generate structured interview plan JSON including:
  │           • competencies + rubrics
  │           • questions with follow-ups
  │           • greeting_script  ← Maya's spoken opening for the interview
  │           • inmail_draft    ← LinkedIn InMail with {{INTERVIEW_LINK}} placeholder
  └── Publish: plan.generated → Kafka
        │
        ▼
API Gateway: save plan (greeting_script + inmail_draft stored in planData)
API Gateway: update state → PENDING
Webhook: interview.plan_generated → callbackUrl
```

To run the Planner Agent locally:

```shell
# From apps/agno-agents/
pip install -r requirements.txt

# Set environment
export AGNO_API_KEY=your-agno-key
export OPENAI_API_KEY=your-openai-key   # or ANTHROPIC_API_KEY
export DATABASE_URL=postgresql://...
export KAFKA_BROKER=localhost:9092

# Start agent
python -m planner.main
```

Interview Agent
The Interview Agent runs during live sessions. Audio flows from the candidate browser to the Vert.x WebSocket edge, which forwards chunks to the Interview Agent. The agent transcribes speech, generates contextual responses, and streams TTS audio back to the candidate.
```text
Candidate Browser (WebRTC/WebSocket)
        │
        ├── 16kHz PCM audio chunks → Vert.x Edge (ws://localhost:8080)
        │
        ▼
Vert.x WebSocket Edge
  ├── Buffers audio
  ├── Forwards to Interview Agent (HTTP/gRPC)
  └── Returns TTS audio to browser
        │
        ▼
Agno Interview Agent (Python)
  ├── Whisper / Deepgram transcription
  ├── Claude / GPT-4 response generation (using interview plan)
  └── TTS synthesis (ElevenLabs / OpenAI TTS)
```

| Component | Port | Protocol |
|---|---|---|
| Vert.x WebSocket Edge | 8080 | WebSocket (ws://) |
| Agno Interview Agent | 8000 | HTTP REST + Kafka |
| API Gateway | 3009 | HTTP REST + Kafka |
Assessor Agent
After the interview session ends, the Vert.x edge publishes interview.completed. The Assessor Agent reads the transcript, scoring rubric, and interview plan, then produces a structured assessment with per-skill scores.
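Before persisting, the Gateway (or the agent itself) can sanity-check the assessment payload. This is a hedged sketch: the field names mirror the example payload on this page, but the 1-5 rubric scale and the set of recommendation values are assumptions:

```python
# Hypothetical validation of the Assessor Agent's output. Field names come
# from the example payload; the 1-5 scale and recommendation enum are assumed.

RECOMMENDATIONS = {"STRONG_HIRE", "HIRE", "NO_HIRE"}  # assumed enum

def validate_assessment(a: dict) -> list:
    """Return a list of problems; an empty list means the payload looks well-formed."""
    problems = []
    score = a.get("overallScore")
    if not isinstance(score, int) or not 0 <= score <= 100:
        problems.append("overallScore must be an int in 0..100")
    if a.get("recommendation") not in RECOMMENDATIONS:
        problems.append("unknown recommendation")
    for category, skills in a.get("scores", {}).items():
        for skill, value in skills.items():
            if value not in (1, 2, 3, 4, 5):
                problems.append(f"{category}.{skill}: score {value!r} outside 1-5")
    return problems
```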
```json
{
  "overallScore": 82,
  "recommendation": "STRONG_HIRE",
  "summary": "Jane demonstrated exceptional TypeScript skills.",
  "scores": {
    "technical": {
      "typescript_proficiency": 4,
      "system_design": 4,
      "problem_solving": 5
    },
    "communication": {
      "clarity": 4,
      "structure": 4
    }
  },
  "highlights": ["Strong system design thinking", "Clear communication"],
  "concerns": ["Limited experience with distributed tracing"],
  "transcript": "..."
}
```

Health Check
```shell
# Check Agno agent health
curl http://localhost:8000/health
```

```json
{
  "status": "ok",
  "agents": {
    "planner": "ready",
    "interview": "ready",
    "assessor": "ready"
  },
  "version": "0.4.0"
}
```

The API Gateway publishes skill.validation.requested only after the required information is complete; the INFO_NEEDED state is invisible to agents.
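A readiness probe against this endpoint can be written with the standard library alone. The URL and payload shape come from this page; the `probe` helper name is mine:

```python
# Minimal readiness probe for the Agno agent health endpoint shown above.
# Stdlib only; URL and payload shape come from this page.
import json
import urllib.request

def all_agents_ready(payload: dict) -> bool:
    """True when the health payload reports 'ok' and every agent as 'ready'."""
    agents = payload.get("agents", {})
    return payload.get("status") == "ok" and bool(agents) and all(
        state == "ready" for state in agents.values()
    )

def probe(url: str = "http://localhost:8000/health") -> bool:
    # Raises URLError if the agent process is not listening at all.
    with urllib.request.urlopen(url, timeout=5) as resp:
        return all_agents_ready(json.load(resp))
```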