Moniepoint Incorporated is a global business payments and banking platform and recently became QED Investors’ first investment in Africa. We are the partner of choice for over 600,000 businesses of all sizes, powering the dreams of SMBs and providing them with equal access to the tools they need to grow and scale.
We are recruiting to fill the position below:
Job Title: AI / ML Python Engineer
Location: Lagos (Remote)
About the Role
Engineering at Moniepoint is an inspired, customer-focused community, dedicated to crafting solutions that redefine our industry. Our infrastructure runs on the tools that excite infrastructure engineers, such as Kubernetes and Docker.
We also make business decisions based on the large stream of data we receive every day: we work with big data, perform data analytics, and build models to make sense of the noise and give our customers the best experience.
We are looking for a highly skilled AI Python Engineer with deep expertise in large language models (LLMs), prompt engineering, and applied AI development.
You will design and deploy production-grade AI solutions, integrate language models into core systems, and contribute to the evolution of Moniepoint’s AI-driven products.
Responsibilities
What You'll Be Doing:
Design, develop, and deploy AI-driven applications using Python and modern frameworks.
Craft, refine, and test prompts and multi-step prompt chains for optimal LLM performance across various business contexts.
Integrate and orchestrate LLM APIs (OpenAI, Anthropic, Gemini, HuggingFace, etc.) and open-source models (LLaMA, Mistral, etc.).
Fine-tune and optimize models for domain-specific use cases.
Build evaluation frameworks to measure model accuracy, prompt effectiveness, and output quality.
Work closely with product managers and domain experts to translate business needs into robust AI-driven solutions.
Develop scalable APIs, pipelines, and microservices that integrate AI models into Moniepoint’s backend systems.
Ensure scalability, security, and reliability of AI systems in production environments.
Research and experiment with cutting-edge techniques in embeddings, RAG (retrieval-augmented generation), and generative AI deployment.
Requirements
What We're Looking For:
BS/MS in Computer Science, Engineering, or a related technical field.
5+ years of professional experience as a Python Developer, preferably in AI/ML-focused roles.
Proven hands-on experience with LLM APIs and frameworks such as OpenAI, Anthropic, or HuggingFace.
Strong skills in prompt engineering (system prompts, few-shot, zero-shot, and chain-of-thought techniques).
Experience with LangChain, LangGraph, LlamaIndex, Ollama, or similar orchestration frameworks.
Familiarity with vector databases (Pinecone, Weaviate, FAISS, Redis, PgVector, etc.) and embeddings.
Good understanding of NLP fundamentals, machine learning workflows, and fine-tuning techniques.
Experience building APIs and integrating AI into production systems.
Knowledge of cloud environments (AWS, GCP, Azure) and containerized deployment (Docker, Kubernetes).
Strong problem-solving skills, with an emphasis on experimentation, iteration, and data-driven decision making.
Strong debugging skills and a methodical approach to problem-solving.
Excellent communication skills and a team-first attitude.
Our Tech Stack
As part of our AI Engineering team, you’ll work with modern technologies that enable us to build intelligent and scalable AI-powered systems. Some of the technologies you’ll be hands-on with include:
Python: Core development language for AI and backend systems.
LangChain / LlamaIndex: For LLM orchestration, RAG, and agent workflows.
OpenAI, Anthropic, Gemini APIs: Leveraging state-of-the-art LLMs for business automation and chat intelligence.
Vector Databases (FAISS, Pinecone, Redis): For semantic search and contextual memory.
Docker & Kubernetes: Containerized deployments and orchestration for scalable AI microservices.
Google Cloud Platform: Hosting and managing AI pipelines and APIs in production.
What we can offer you
Culture - We put our people first and prioritize the well-being of every team member. We’ve built a company where all opinions carry weight and where all voices are heard. We value and respect each other and always look out for one another. Above all, we are human.
Learning - We have a learning and development-focused environment with an emphasis on knowledge sharing, training, and regular internal technical talks.
Compensation - You’ll receive an attractive salary, pension, health insurance, and paid leave, plus other benefits.
What to expect in the hiring process
Our interview process is designed to be thoughtful, transparent, and candidate-friendly, allowing you to showcase your strengths while getting to know us better:
Technical Assessment – A practical take-home assignment focused on LLM integration, prompt design, and AI pipeline implementation.
Initial Conversation – A brief introductory call with our recruiter to learn more about your background and share insights about the role and our AI initiatives.
Deep Dive Technical Interview – A discussion with our AI Engineering team on system design, architecture, and model integration approaches.
Final Interview – A combined technical and behavioural conversation with a member of our Executive Team, focusing on your alignment with our values, vision, and culture.