At IBM, work is more than a job - it's a calling: To build. To design. To code. To consult. To think along with clients and sell. To make markets. To invent. To collaborate. Not just to do something better, but to attempt things you've never thought possible. Are you ready to lead in this new era of technology and solve some of the world's most challenging problems? If so, let's talk.
Your Role and Responsibilities
As a Big Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines and workflows.
• Design data solutions using Hadoop-based technologies, along with Azure HDInsight, for a Cloudera-based data lake, using Scala
• Ingest data from files, streams and databases, and process it with Hadoop, Scala, SQL databases, Spark, ML and IoT technologies
• Develop programs in Scala and Python for data cleaning and processing
• Design and develop distributed, high-volume, high-velocity, multi-threaded event-processing systems
• Develop efficient software code leveraging Python and big data technologies for the various use cases built on the platform
• Ensure operational excellence, guaranteeing high availability and platform stability
• Implement scalable solutions to meet ever-increasing data volumes, using big data and cloud technologies such as PySpark and Kafka
If you thrive in a dynamic, collaborative workplace, IBM provides an environment where you will be challenged and inspired every single day. And if you relish the freedom to bring creative, thoughtful solutions to the table, there's no limit to what you can accomplish here.
Required Technical and Professional Expertise
• 5+ years of experience with big data technologies
• Proficiency in at least one of the following programming languages: Python, Scala or Java
• Mid- to expert-level programming experience in a large-scale enterprise environment (mandatory)
• In-depth experience with modern data platform components such as Hadoop, Hive, Pig, Spark, Python and Scala
• Experience with distributed version control systems such as Git
• Familiarity with development tools: an IDE such as IntelliJ, Eclipse or VS Code, and the Maven build tool
• Demonstrated experience in modern API platform design, including how modern UIs are built to consume services/APIs
• Experience with Azure cloud, including Data Factory, Databricks and Data Lake Storage, is highly preferred
• Solid experience in all phases of Software Development Lifecycle - plan, design, develop, test, release, maintain and support, decommission
Preferred Technical and Professional Expertise
• Good to have at least one of the certifications listed below:
• AZ-900 - Azure Fundamentals
• DP-200, DP-201, DP-203 or AZ-204 - Data Engineering
• AZ-400 - DevOps certification
• You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting edge technologies
• Ambitious individual who can work under their own direction towards agreed targets/goals, with a creative approach to work
• Intuitive individual with an ability to manage change and proven time-management skills
• Proven interpersonal skills, contributing to the team effort by accomplishing related results as needed
• Keeps technical knowledge up to date by attending educational workshops and reviewing publications