Posted on Mon 30th Mar, 2026 - hotnigerianjobs.com
Jedayah AI is an AI automation agency that designs and deploys intelligent AI solutions for businesses in Africa. We build production-grade solutions and help organizations unlock the value of artificial intelligence by designing and implementing AI automation and agentic systems that solve real business problems.
You will work at the intersection of LLMs, workflow automation, AI agents, and backend systems to create scalable, production-ready solutions for our clients.
This role requires strong engineering fundamentals, practical experience with modern AI stacks, and the ability to translate business problems into robust AI-powered systems, from initial solution architecture through to post-delivery monitoring.
Analyze client requirements and translate them into end-to-end AI solution designs
Decide the right architecture for each problem and document the rationale
Design multi-agent systems where appropriate
Produce clear solution design documents
Define human-in-the-loop checkpoints, confidence thresholds, and escalation paths as first-class design elements, not afterthoughts
Evaluate build vs. buy for every component
Automation & Agent Build:
Build production-grade agentic workflows in n8n for automation-heavy pipelines
Develop stateful, multi-step AI agents using LangGraph, managing agent state, conditional branching, tool-use loops, and cycle detection for complex reasoning workflows
Build RAG (Retrieval-Augmented Generation) pipelines: document loading, chunking strategies, embedding generation, vector store ingestion, and retrieval chains
Implement LLM-powered classification, extraction, summarization, and generation using hosted LLM APIs or open-source models, depending on the use case
Fine-tune open-source LLMs using LoRA/QLoRA on domain-specific datasets where a base model is insufficient
Build and deploy conversational agents over selected channels via Termii/Twilio
Engineer prompts precisely and systematically
Implement tool-use and function-calling patterns that allow LLMs to interact with external APIs, databases, and computation functions safely and reliably
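For illustration, the chunking step of a RAG ingestion pipeline like the one described above can be as simple as a sliding window with overlap. This is a minimal sketch in plain Python; the function name and default sizes are illustrative, not part of any specific stack:

```python
def chunk_text(text: str, chunk_size: int = 800, overlap: int = 100) -> list[str]:
    """Split text into overlapping fixed-size character chunks for embedding."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # the final window already covers the tail of the text
    return chunks
```

In practice, chunk size and overlap are tuned per knowledge base, and token-based or semantic chunking often replaces raw character windows.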
Vector Databases & Knowledge Management:
Design and manage vector store schemas for RAG applications, choosing the right embedding model, chunk size, overlap strategy, and metadata structure for each knowledge base
Work with vector databases, selecting the right store based on latency, scale, and infrastructure requirements
Build document ingestion pipelines that process PDFs, DOCX files, web pages, and structured data into clean, searchable vector stores with accurate metadata tagging
Implement hybrid search strategies combining dense vector similarity with sparse keyword search (BM25) for improved retrieval precision on domain-specific corpora
Manage embedding model selection and versioning, understanding the trade-offs between OpenAI text-embedding-3, Cohere embed, and open-source alternatives (BGE, E5, Nomic)
Build re-ranking layers using cross-encoders or LLM-based relevance scoring to improve retrieval quality before LLM context injection
Monitor retrieval quality in production: track retrieval precision, response quality, and hallucination rates, and iterate on chunking and retrieval strategies accordingly
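As a concrete example of the hybrid-search fusion mentioned above, reciprocal rank fusion (RRF) is one common way to merge a dense vector ranking with a BM25 ranking. A minimal sketch, with illustrative identifiers:

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked result lists (e.g. dense vector hits and BM25 hits)
    into a single ordering using reciprocal-rank scores."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            # documents appearing high in any list accumulate more score
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

The constant k=60 is a conventional default; many vector stores also offer built-in hybrid search, in which case this fusion happens server-side.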
API Integrations & Configuration:
Integrate third-party APIs across the Jedayah AI stack and client-specific enterprise systems
Configure and maintain authentication patterns across all integrations: OAuth2, API keys, bearer tokens, webhook signature verification, and service account credentials
Design and implement webhook-driven pipelines: inbound event handling, idempotency, replay safety, and out-of-order delivery handling
Set up Supabase schemas, RLS (Row Level Security) policies, storage buckets, edge functions, and real-time subscriptions to support multi-tenant AI application data
Manage environment configurations across development, staging, and production: secrets management, credential rotation, and environment variable governance
Write and maintain API documentation for internal AI services and any external-facing APIs exposed to clients or integration partners
Evaluate and onboard new API vendors, LLM providers, and integration platforms as client solution requirements evolve
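Two of the webhook patterns listed above, signature verification and idempotent processing, can be sketched in a few lines of standard-library Python. The in-memory set stands in for what would be a database table in production, and all names are illustrative:

```python
import hashlib
import hmac

def verify_signature(secret: bytes, payload: bytes, signature_hex: str) -> bool:
    """Check an HMAC-SHA256 webhook signature in constant time."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

_seen_event_ids: set[str] = set()  # production systems would persist this

def handle_event(event_id: str, process) -> bool:
    """Process a webhook event at most once, keyed by its event id."""
    if event_id in _seen_event_ids:
        return False  # duplicate or replayed delivery: safely ignored
    _seen_event_ids.add(event_id)
    process()
    return True
```

Real providers differ in how they encode signatures (some prefix a scheme or include a timestamp), so the exact verification format should always follow the vendor's documentation.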
Open-Source Models & Self-Hosting:
Select and deploy open-source LLMs appropriate to the task, based on capability, context length, and inference cost
Deploy and serve open-source models using vLLM, Ollama, or Hugging Face Text Generation Inference (TGI) on cloud GPU instances (AWS, GCP, RunPod) or local infrastructure
Fine-tune domain-specific models using Parameter-Efficient Fine-Tuning (PEFT) methods (LoRA, QLoRA, adapters) on annotated client datasets for classification, extraction, and generation tasks
Manage model quantization (GGUF, GPTQ, AWQ) to optimize inference cost and latency without significant quality degradation
Track and compare model performance across versions using MLflow or Weights & Biases: evaluation metrics, prompt sensitivity, and output quality benchmarks
Make informed model selection decisions, balancing capability, latency, inference cost, and hosting constraints
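To make the quantization trade-off concrete, a back-of-envelope estimate of the VRAM needed just to hold a model's weights at a given bit width. This is a rough heuristic, not a sizing tool; the 20% headroom figure for KV cache and activations is an assumption:

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: float,
                     overhead: float = 1.2) -> float:
    """Rough VRAM (GB) to serve a model's weights at a given quantization,
    with ~20% headroom assumed for KV cache and activations."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9
```

By this estimate a 7B model needs roughly 16.8 GB at 16-bit but only about 4.2 GB at 4-bit, which is why quantized GGUF/GPTQ/AWQ variants fit on much cheaper GPUs.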
Internal Quality Control & Testing:
Build test suites for every AI workflow before client delivery, covering happy paths, edge cases, low-confidence routing, malformed inputs, API failure states, and retry behavior
Implement LLM evaluation frameworks to benchmark classification accuracy, extraction precision, hallucination rates, and response quality on labelled datasets before deployment
Test all file and document outputs against specification requirements to ensure zero format rejections
Build regression test sets for every deployed agent; run them on every update to catch prompt sensitivity and model version drift before it reaches production
Document failure modes, confidence thresholds, known edge cases, and manual override procedures for every deployed system
Conduct structured UAT (User Acceptance Testing) with clients before go-live; capture formal sign-off and track open issues to resolution
Implement A/B testing infrastructure for prompt variants and retrieval strategies to enable data-driven optimization post-launch
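The regression-eval idea above reduces, in its simplest form, to scoring a predictor against a labelled set and gating release on a threshold. A minimal sketch with illustrative names:

```python
def run_regression_eval(predict, labelled_set: list[tuple[str, str]],
                        threshold: float = 0.95) -> dict:
    """Score a classifier against a labelled regression set and report
    whether it meets the release accuracy threshold."""
    correct = sum(1 for text, label in labelled_set if predict(text) == label)
    accuracy = correct / len(labelled_set)
    return {"accuracy": accuracy, "passed": accuracy >= threshold}
```

In a real pipeline, `predict` would wrap an LLM call with a pinned prompt and model version, and the run would be logged so prompt or model changes can be compared across versions.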
Post-Delivery Monitoring:
Set up observability for all live AI systems: workflow error rates, LLM API latency and cost, token consumption per task, retrieval hit rates, and end-to-end task completion rates
Build centralized error logging pipelines: all workflow failures written to Supabase with full context (input, state, error, timestamp), with Teams alerts to the ops team
Monitor confidence score distributions on live classification pipelines to detect and investigate distribution shift that may indicate model or data drift
Use LLM observability tools (LangSmith, Helicone, or Langfuse) to trace agent runs, inspect intermediate steps, and identify failure patterns in production
Respond to client-reported issues: diagnose root causes from workflow logs and agent traces, deploy fixes within agreed SLAs, and communicate clearly with clients throughout
Produce monthly performance reports per client: tasks processed, errors caught, confidence distributions, uptime, cost per task, and recommended optimizations
Proactively identify degradation in model or retrieval quality before clients notice; review evals regularly and propose retraining or prompt updates as needed
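One standard way to quantify the confidence-distribution shift mentioned above is the population stability index (PSI), where values above roughly 0.2 are commonly read as meaningful drift. A minimal sketch assuming scores lie in [0, 1]:

```python
import math

def population_stability_index(baseline: list[float], live: list[float],
                               bins: int = 10) -> float:
    """Compare two confidence-score distributions via PSI;
    a value above ~0.2 is a common rule of thumb for drift."""
    def bucket_fractions(scores):
        counts = [0] * bins
        for s in scores:
            idx = min(int(s * bins), bins - 1)  # scores assumed in [0, 1]
            counts[idx] += 1
        # small floor avoids log(0) when a bucket is empty
        return [max(c / len(scores), 1e-6) for c in counts]

    base, cur = bucket_fractions(baseline), bucket_fractions(live)
    return sum((c - b) * math.log(c / b) for b, c in zip(base, cur))
```

Identical distributions score near zero; a run of live scores shifted away from the baseline pushes PSI well past the alert threshold, which is the trigger to investigate prompts, model versions, or input data.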
Tool & Vendor Management:
Manage the engineering stack
Track LLM provider updates (model releases, deprecations, pricing changes, context window expansions, and new capabilities) and proactively assess their impact on live systems
Evaluate emerging agentic frameworks, vector store options, and automation platforms as they mature; recommend adoption with clear rationale and migration plans
Manage API usage budgets across all LLM and infrastructure vendors: monitor spend, flag anomalies, optimize model selection and caching strategies to control costs
Maintain a secure secrets and credentials management system across all environments: API keys, service accounts, OAuth tokens, and database credentials
Liaise directly with vendor support and developer relations teams for integration issues, early access to new features, and technical escalations
Maintain an internal tool registry: for every tool in the stack, document its purpose, owner, cost, alternative options, and replacement plan in the event of deprecation or failure
Requirements
Technical Skills:
Degree or certification in Computer Science, Software Engineering, Information Technology, or a related field
3+ years of software engineering experience, with at least 2 years building production AI or LLM-powered systems
Strong proficiency in Python and/or JavaScript (Node.js): async programming, API integration, data transformation, and working with AI SDKs
Experience building with LLMs: prompting, tool use / function calling, structured outputs, and streaming responses
Hands-on experience with LangChain or LlamaIndex: chains, agents, retrievers, memory, and tool integrations
Hands-on experience with LangGraph: stateful agent design, node/edge graphs, conditional routing, and multi-agent orchestration
Working experience with vector databases: Pinecone, Weaviate, pgvector, Chroma, or FAISS
Experience with REST APIs, webhooks, and system integrations: authentication, retry logic, and rate limit handling
Familiarity with automation platforms: n8n, Make, or equivalent
AI / ML Knowledge:
Solid understanding of RAG architectures: chunking strategies, embedding models, retrieval methods, and re-ranking
Working knowledge of embeddings, semantic search, and vector similarity including trade-offs between embedding models
Understanding of LLM limitations: hallucinations, context window constraints, prompt sensitivity, and output non-determinism
Experience evaluating and improving AI system performance: offline evals, production monitoring, and iterative optimization
Exposure to open-source LLMs (Llama 3, Mistral, Qwen) and self-hosting inference (Ollama, vLLM, or TGI)
Infrastructure & DevOps:
Experience deploying on cloud platforms: AWS, GCP, or Azure
Working knowledge of Docker and CI/CD pipelines for AI workload deployment
Familiarity with monitoring and observability tools such as LangSmith, Langfuse, Helicone, or Arize Phoenix
Experience with Supabase or PostgreSQL for application data and pgvector for embedding storage
Nice to Have:
Experience with agent frameworks: AutoGen, CrewAI, or custom multi-agent orchestration patterns
Experience with workflow orchestration tools: Temporal, Celery, or BullMQ for long-running async processes
Familiarity with model fine-tuning: LoRA/QLoRA, dataset curation, and evaluation of fine-tuned outputs
Experience with hybrid search: combining dense vector retrieval with BM25 / sparse retrieval and cross-encoder re-ranking
Experience with Termii/Twilio WhatsApp Business API for conversational agent delivery
Background in building SaaS products, internal tools, or multi-tenant applications with data isolation
Experience working in a startup or agency environment; comfortable context-switching across multiple client projects
Understanding of security, rate limiting, secrets management, and production reliability practices
Contributions to open-source AI agent frameworks, LLM tooling, or automation projects
Soft Skills:
Strong problem-solving and system design instincts: you think in architectures, not just code
Ability to work in a fast-paced, client-facing environment where requirements evolve quickly
Clear written and verbal communication; comfortable presenting technical decisions to non-technical stakeholders
Ownership mindset: you treat deployed systems as yours, not tickets you closed
Attention to detail, especially on AI outputs, prompt behavior, and edge case handling where errors have real consequences
Proactive about learning: the AI landscape changes weekly, and you keep up without being told to
What We Offer
Competitive salary
Hybrid/Remote, based in Lagos, Nigeria
Direct exposure to real client problems across fintech, document automation, and business intelligence
Access to premium LLM APIs, GPU compute credits, and a dedicated learning budget
A fast-moving team where your decisions ship to production quickly and your impact is directly visible
Work on cutting-edge AI agents and real-world automation problems in Nigerian and African market contexts
High ownership and direct impact across multiple client projects, no narrow task work, no bureaucracy
Opportunity to shape the architecture of next-generation AI-powered businesses from early stage
Clear progression path, this role grows as Jedayah AI scales
Learning budget for certifications, courses, conferences, and AI tooling.
A culture that values curiosity, outcomes, and moving fast over bureaucracy.
Application Closing Date
20th April, 2026.
How to Apply
Interested and qualified candidates should send their CV and a link to an AI agent, RAG pipeline, or automation system they have built to: careers@jedayahai.com using the Job Title as the subject of the email.