Big Data Developer

About the Employer
Annual Salary: Not disclosed

Job Description

Big Data Developer

Location: Hyderabad

Experience: 6-10 years

The Company

Hitachi Vantara, a wholly owned subsidiary of Hitachi, Ltd., guides our customers from what's now to what's next by solving their digital challenges. Working alongside each customer, we apply our unmatched industrial and digital capabilities to their data and applications to benefit both business and society. More than 80% of the Fortune 100 trust Hitachi Vantara to help them develop new revenue streams, unlock competitive advantages, lower costs, enhance customer experiences, and deliver social and environmental value.

The Role

Hitachi Vantara is seeking an experienced Software Engineer to join our Catalog Engineering team.

Responsibilities

The Hitachi Vantara Catalog Engineering team plays a critical role in developing next-generation technologies that automate the management of data and information and the creation of knowledge. Our products must handle information at massive scale across banking, capital markets, retail, energy, and healthcare.

We are looking for developers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking, security, artificial intelligence, machine learning and natural language processing.

You will work in a truly international, distributed team with main hubs in the United States, Europe, and Asia.

REQUIRED SKILLS
• 6+ years of experience designing and building successful customer-facing products.
• Bachelor's and/or Master's degree, preferably in Computer Science, Software Engineering, Math, Statistics, or an equivalent quantitative discipline.
• Proficiency in at least one strongly typed programming language, such as Java or Kotlin.
• Exposure to NoSQL systems.
• Hands-on experience (live projects) with data modelling and structural modelling in Cassandra, Dremio, or Presto.
• Exposure to distributed computing and large-scale data processing.
• Experience integrating solutions that consume Presto, Dremio, or Cassandra as data sources, including query optimization and reading at scale.
• Exposure to data access security across multiple systems.

Nice to have:
• DynamoDB experience.
• Cassandra capacity planning, installation, and deployment in cloud or on-premises infrastructure.
• Exposure to cloud systems.
• Exposure to Hadoop distributions.

Responsibilities (include but are not limited to):
• Design, develop, and optimize data consumption from Cassandra, Presto, or Dremio clusters.
• Optimize data reads and processing from the above data systems.
• Capacity planning.
• Planning and executing data access security.
• Monitoring.
• Troubleshooting.
• Data access modelling, design, and implementation.
• Establishing and driving standards within the team.
• Conducting knowledge transfer sessions and training.

Qualifications

Any Engineering Graduate

We are an equal opportunity employer. All applicants will be considered for employment without attention to age, race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, or disability.
