Roles and Responsibilities
Big Data Hadoop/Spark with Scala, Java, or Python
Design and deliver consumer-centric, high-performance systems. You will deal with huge volumes of data arriving through batch and streaming platforms, and you will be responsible for building and delivering data pipelines that process, transform, integrate, and enrich data to meet various business demands.
- Mentor the team on infrastructure, networking, data migration, monitoring, and troubleshooting
- Focus on automation using Infrastructure as Code (IaC), Jenkins, DevOps practices, etc.
- Design, build, test and deploy streaming pipelines for data processing in real time and at scale
- Experience with stream-processing systems such as Storm, Spark Streaming, Flink, etc.
- Experience with object-oriented/functional programming languages: Scala, Java, etc.
- Develop software systems using test-driven development and CI/CD practices
- Partner with other engineers and team members to develop software that meets business needs
- Follow Agile methodology for software development and technical documentation
- Banking/finance domain knowledge is a plus
- Strong written and oral communication, presentation, and interpersonal skills
- Exceptional analytical, conceptual, and problem-solving abilities
- Able to prioritize and execute tasks in a high-pressure environment
- Experience working in a team-oriented, collaborative environment
Role: Full Stack Developer
Salary: Not Disclosed by Recruiter
Industry: Analytics / KPO / Research
Functional Area: Engineering - Software
Role Category: Software Development
Employment Type: Full Time, Permanent
CHANGE LEADERS CONSULTING
We are passionate career advisers and community builders supporting brands in predictive analytics, consulting, and IoT.