Responsibilities
- Develop, test, and deploy Generative AI models using LangChain, AWS Bedrock, and other AI frameworks.
- Leverage Vector Databases (e.g., AlloyDB, Aurora PostgreSQL DB) to enhance AI search efficiency, knowledge retrieval, and retrieval-augmented generation (RAG) for improved contextual responses.
- Apply prompt engineering to design effective prompts that optimize AI-generated outputs.
- Implement search and retrieval techniques, including code ingestion and knowledge retrieval, to enhance AI performance.
- Debug AI models and cloud-based solutions independently, particularly within AWS environments.
- Build and integrate frontend applications using Streamlit and backend services using Python to create end-to-end AI applications.
- Collaborate with cross-functional teams to test AI-driven solutions, gather feedback, and enhance system capabilities.
- Communicate complex AI concepts effectively to non-technical stakeholders, ensuring clarity in AI adoption and implementation.
- Actively contribute to team discussions, providing insights and proposing solutions to AI-related challenges.
Requirements
- Diploma / Degree in Computer Engineering with 1 year of relevant experience
- Proficiency in Python – Strong programming skills, especially in AI/ML model development, API integration, and cloud-based deployment.
- Prompt Engineering Expertise – Ability to craft, test, and optimize AI prompts to improve LLM (Large Language Model) performance.
- Experience with Vector Databases (e.g., AlloyDB, Aurora PostgreSQL DB) – Knowledge of retrieval-augmented generation (RAG) techniques and efficient embedding storage and retrieval.
Shortlisted candidates will be offered a 1-year agency contract.