At IBM, work is more than a job - it's a calling: To build. To design. To code. To consult. To think along with clients and sell. To make markets. To invent. To collaborate. Not just to do something better, but to attempt things you've never thought possible. Are you ready to lead in this new era of technology and solve some of the world's most challenging problems? If so, let's talk.
Your Role and Responsibilities
Be involved in data engineering activities such as creating pipelines and workflows for source-to-target data movement.
• You will be involved in the Azure Data platform implementation, and must be willing to work shift timings of 2 PM to 11 PM
• Responsible for assessing the feasibility of migrating customer solutions and/or integrating with 3rd party systems across both Microsoft and non-Microsoft platforms
• Designing and developing Data Pipelines for Data Ingestion or Transformation using Python (PySpark)/Spark SQL
• Work with version control in GitHub and CI/CD pipelines using Azure DevOps
If you thrive in a dynamic, collaborative workplace, IBM provides an environment where you will be challenged and inspired every single day. And if you relish the freedom to bring creative, thoughtful solutions to the table, there's no limit to what you can accomplish here.
Required Technical and Professional Expertise
• 5+ years of total experience in the IT industry
• 2+ years of current, in-depth experience with Azure Data platform implementation, including proven experience managing projects through the entire project lifecycle
• 2+ years of relevant experience in Azure Data Factory, Azure Data Lake, Azure DevOps, Azure Databricks, and Azure SQL
• Proficiency in Azure data integration and Azure data architecture
• Proven experience using the Microsoft Azure data stack (ADFv2, Azure SQL DB, Azure SQL Data Warehouse, Azure Data Lake, Azure Databricks, Analysis Services, Cosmos DB)
• Experience in Azure data migration patterns
• Experience in Python and C# is mandatory
• Agile methodology experience essential
• 3+ years' experience assessing the feasibility of migrating customer solutions and/or integrating with 3rd party systems across both Microsoft and non-Microsoft platforms
• Hands-on experience designing and developing Data Pipelines for Data Ingestion or Transformation using Python (PySpark)/Spark SQL
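To give a sense of the kind of pipeline work described above, here is a minimal source-to-target transformation step. It is sketched in plain Python for illustration only; in this role the same logic would typically be expressed as PySpark DataFrame operations (e.g. reading with spark.read.csv, filtering incomplete records, and casting columns). The column names and sample data are hypothetical.

```python
import csv
import io

def ingest_and_transform(source_csv: str) -> list:
    """Minimal source-to-target step: parse raw rows, drop incomplete
    records, and cast the amount column to float. A PySpark pipeline
    would express the same logic with DataFrame filter/withColumn calls."""
    reader = csv.DictReader(io.StringIO(source_csv))
    target = []
    for row in reader:
        # Data-quality gate: skip records missing required fields.
        if not row.get("id") or not row.get("amount"):
            continue
        target.append({"id": row["id"], "amount": float(row["amount"])})
    return target

# Hypothetical sample source data: one record is missing its amount.
raw = "id,amount\n1,10.5\n2,\n3,7.25\n"
result = ingest_and_transform(raw)
```

The incomplete record (id 2) is filtered out, and the remaining amounts arrive at the target typed as floats rather than strings.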
Preferred Technical and Professional Expertise
• AZ-900 - Azure Fundamentals
• DP-200, DP-201, DP-203, AZ-204 - Data Engineering
• AZ-400 - DevOps certification
• You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting edge technologies
• Ambitious individual who can work under their own direction towards agreed targets/goals, with a creative approach to work
• Intuitive individual with an ability to manage change and proven time management skills
• Proven interpersonal skills while contributing to team effort by accomplishing related results as needed
• Keeps technical knowledge up to date by attending educational workshops and reviewing publications
• Experience working with version control in GitHub and CI/CD pipelines using Azure DevOps