
About the Role
Key Responsibilities
Develop and execute robust testing strategies for Conversational AI solutions, including conversational flow testing, NLP and API testing, functional testing, regression testing, integration testing, performance testing, and user experience validation.
Analyze and utilize conversational data (chats, emails, and calls) for testing and training AI systems.
Perform performance, load, and stress testing to evaluate platform scalability and integration readiness.
Work with cloud platform and Conversational AI services (AWS, Azure, Google Cloud, Omilia, Kore.ai, NICE, Salesforce, etc.) to design, build, and test Conversational AI solutions.
Ensure AI solutions meet high quality standards across TTS, STT, SSML modeling, Intent Analytics, OmniChannel AI, IVR, Intelligent Agent Assist, and Contact Center as a Service (CCaaS).
Leverage expertise in sentiment analysis, topic modeling, and text classification to validate and improve system performance.
Automate testing processes using tools and frameworks such as Cyara, Botium, Postman, Qmetry, Selenium, IBM Watson Assistant Toolkits, and Cognigy Test Case Builder.
Troubleshoot and resolve testing-related issues efficiently and effectively.
Verify system performance across diverse devices and environments.
Technical Requirements
Proficiency in JavaScript; knowledge of Python, PySpark, R, or SQL is a plus.
Hands-on experience with cloud platforms and their Conversational AI services.
Familiarity with diverse testing methods, including automation frameworks and tools.
Expertise in performance evaluation metrics and integration assessments.
Preferred Qualifications
Proficiency in data visualization tools like Tableau, Power BI, or QuickSight.
Experience in survey analytics or organizational functions such as pricing, sales, marketing, operations, and customer insights.
Requirements
Candidates should have 4+ years of hands-on experience.
Conversational AI Testing: Expertise in NLU-based methodologies, conversational flow, NLP, API, regression, integration, performance, and user experience testing.
Data Understanding: Conversational data analysis (chats, emails, calls) for training and testing Conversational AI systems.
Performance Evaluation: Load and stress testing to assess platform scalability and integration readiness.
Cloud Platforms: Knowledge of AWS, Azure, Google Cloud, Kore.ai, NICE, Salesforce, etc.
NLP/NLU Expertise: TTS, STT, SSML modeling, Intent Analytics, OmniChannel AI, IVR, Intelligent Agent Assist, CCaaS.
Testing Automation Tools: Proficiency in Cyara, Botium, Postman, Selenium, Qmetry, IBM Watson Toolkit, Cognigy Test Builder, etc.
Programming: Strong JavaScript skills; Python, PySpark, R, or SQL is a plus.
Sentiment & Text Analysis: Familiarity with sentiment analysis, topic modeling, and text classification.
Testing Devices: Testing Conversational AI systems across diverse devices and environments.
Visualization Tools (Bonus): Tableau, Power BI, QuickSight.