Work collaboratively to design, integrate, and evaluate AI coding assistants powered by LLMs (such as GPT, Claude, or custom models), with the goal of automating code generation, code review, and testing, and improving developer productivity
Design, implement, and benchmark agentic AI workflows where autonomous software agents perform multi-step reasoning, orchestration, and interaction with APIs, databases, and cloud services
Experiment with frameworks such as LangChain, AutoGen, and CrewAI for LLM orchestration and automated agent behavior (a framework-agnostic agent-loop sketch appears after this list)
Develop and refine prompt engineering strategies to optimize LLM performance on specific coding, code review, testing, documentation, and workflow automation tasks (see the prompt-template sketch below)
Learn to critically evaluate AI-generated content, distinguishing accurate, genuinely useful information from plausible-sounding but incorrect output
Support R&D and deployment of retrieval-augmented generation (RAG) pipelines, knowledge-base integration, and multi-agent systems (a minimal RAG sketch follows this list)
Document design decisions, experimental results, and best practices for integrating and evaluating these systems
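
As a framework-agnostic illustration of the agentic workflows above, here is a minimal sketch of a tool-use loop in Python. The call_llm() function and the get_weather tool are hypothetical stand-ins; a real project would delegate this loop to LangChain, AutoGen, or CrewAI:

    import json

    def call_llm(messages):
        # Hypothetical stand-in for a real LLM API call; it answers
        # directly here so the sketch runs end to end.
        return json.dumps({"answer": "stub response"})

    # Registry of callable tools the agent may invoke.
    TOOLS = {
        "get_weather": lambda city: f"Sunny in {city}",
    }

    def run_agent(task, max_steps=5):
        messages = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            reply = call_llm(messages)  # expects JSON: {"tool", "args"} or {"answer"}
            action = json.loads(reply)
            if "answer" in action:
                return action["answer"]  # the agent decided it is done
            result = TOOLS[action["tool"]](**action["args"])
            messages.append({"role": "assistant", "content": reply})
            messages.append({"role": "user", "content": f"Tool result: {result}"})
        return "step limit reached"

    print(run_agent("What is the weather in Oslo?"))

The step cap bounds the agent's autonomy, which matters when benchmarking multi-step workflows against cost and latency budgets.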
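For the prompt engineering work, a reusable template with an explicit rubric and a one-shot example is a common starting point. This sketch assumes any chat-style LLM client; the rubric wording and example finding are illustrative, not a prescribed standard:

    # Illustrative code-review prompt template; the rubric and example
    # finding below are assumptions, not a fixed house standard.
    REVIEW_PROMPT = (
        "You are a senior engineer reviewing a pull request.\n"
        "Rubric: correctness, readability, test coverage, security.\n"
        "List findings as bullets, most severe first. Example finding:\n"
        "- [security] SQL built by string concatenation; use parameters.\n\n"
        "Code to review ({language}):\n{code}\n"
    )

    def build_review_prompt(code: str, language: str = "python") -> str:
        return REVIEW_PROMPT.format(code=code, language=language)

    print(build_review_prompt("def add(a, b): return a + b"))

Keeping the template in one place makes it easy to A/B different rubrics or few-shot examples when measuring review quality.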
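Finally, a minimal RAG sketch: retrieve the most relevant documents by embedding similarity and splice them into the prompt. The embed() function below is a toy character-frequency vector standing in for a real embedding model, and in practice the assembled prompt would be sent to an LLM client rather than printed:

    import math

    def embed(text):
        # Toy embedding: character-frequency vector; a real pipeline
        # would use a sentence-embedding model instead.
        vec = [0.0] * 26
        for ch in text.lower():
            if "a" <= ch <= "z":
                vec[ord(ch) - ord("a")] += 1.0
        return vec

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def retrieve(query, docs, k=2):
        q = embed(query)
        return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

    def answer(query, docs):
        context = "\n".join(retrieve(query, docs))
        return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

    docs = ["LangChain supports RAG chains.", "CrewAI coordinates multiple agents."]
    print(answer("How do I coordinate agents?", docs))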