17 Apr | Mercor Ai | Edmonton
Apply on Kit Job: kitjob.ca/job/2fscae
Step into a lead evaluator role focused on improving code quality across multiple programming languages. Apply your extensive software engineering experience to assess LLM-generated responses with precision and clarity.
In this role, you will draw on 5+ years of software engineering experience and expertise in languages such as C++ and Ruby. You will validate code outputs, fact-check responses, and provide quality annotations to ensure responses meet the required evaluation standards.
Key Responsibilities:
• Evaluate coding responses for clarity and logic
• Conduct fact-checking using reliable references
• Execute code to validate and test outputs
• Annotate areas of improvement in model responses
• Uphold evaluation standards through transparent guidelines
Requirements:
• BS, MS, or PhD in Computer Science or equivalent
• 5+ years of experience in technical roles
• Proficient in multiple programming languages
• Experience using LLMs for coding tasks
• Strong detail orientation for complex evaluations
Leverage your technical expertise to enhance code evaluation standards and contribute significantly to software engineering insights.
📌 Lead Evaluator for Software Engineering Responses in Multiple Languages (Edmonton)