Participate in the development and integration of advanced large language model (LLM) projects;
Design and implement model research, fine-tuning, or deployment pipelines based on the Transformer architecture;
Build agentic AI systems that combine memory, tools, APIs, and reasoning to solve real-world tasks;
Work closely with product and engineering teams to deploy AI applications in enterprise environments;
Stay current with the latest advances in foundation models (e.g., from OpenAI, Google, DeepSeek, and Anthropic), and help translate research into product innovation.
Requirements
Education & Background
Master's or Ph.D. degree in Computer Science, AI, or a related technical field;
Fresh graduates from NUS, NTU, SMU or equivalent top institutions are encouraged to apply;
EP sponsorship is available for qualified overseas candidates.
Technical Skills
Solid understanding of Transformer-based LLM architectures, fine-tuning, or retrieval-augmented generation (RAG);
Experience with AI frameworks such as PyTorch or TensorFlow;
Proficient in programming languages such as Python (preferred) or Java;
Familiarity with AI agent frameworks (e.g., LangChain) and AI-assisted development tools such as Cursor and Windsurf;
Experience with cloud platforms like AWS or Azure is a plus.
Other Qualities
Strong analytical and communication skills;
Self-motivated with a passion for innovation and emerging technologies;
Ability to work independently and deliver high-quality code and ideas.