Big Data Hadoop with data warehouse

About the Employer
Annual Salary: Not disclosed

Job Description

Job Title: Big Data Hadoop with data warehouse environment
Location: Columbus, Ohio
Interviews: Teams with video
Submissions close Monday 2/15 at 5 PM EST

REQUIRED Skills

8+ years of Business Data Analysis experience in Waterfall and Agile methodologies across various domains (prefer Healthcare) in a data warehouse environment.

Good knowledge of relational databases, the Hadoop big data platform and tools, and data vault and dimensional data model analysis and design.

5+ years' experience creating DDLs and DMLs (Oracle, Hive and Impala preferred).
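For context on the DDL/DML distinction this requirement refers to, here is a minimal sketch. SQLite is used purely for portability; Hive and Impala DDL would add storage clauses such as STORED AS PARQUET or PARTITIONED BY, and the table and columns shown are hypothetical.

```python
import sqlite3

# Illustrative only: DDL defines structure, DML manipulates rows.
conn = sqlite3.connect(":memory:")

# DDL: define a dimension table (hypothetical schema)
conn.execute("""
    CREATE TABLE dim_member (
        member_id   INTEGER PRIMARY KEY,
        member_name TEXT NOT NULL,
        plan_code   TEXT
    )
""")

# DML: insert, update, and query rows
conn.execute("INSERT INTO dim_member VALUES (1, 'Jane Doe', 'PLAN-A')")
conn.execute("UPDATE dim_member SET plan_code = 'PLAN-B' WHERE member_id = 1")
row = conn.execute(
    "SELECT plan_code FROM dim_member WHERE member_id = 1"
).fetchone()
print(row[0])  # PLAN-B
```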

3+ years' experience in analysis, design, development, support and enhancements in a data warehouse environment with Cloudera Big Data technologies (Hadoop, MapReduce, Sqoop, PySpark, Spark, HDFS, Hive, Impala, StreamSets, Kudu, Oozie, Hue, Kafka, Yarn, Python, Flume, Zookeeper, Sentry, Cloudera Navigator) along with Informatica.

3+ years' experience migrating data from relational databases (prefer Oracle) to the Hadoop big data platform is a plus.

8+ years' experience eliciting, analyzing and documenting functional, non-functional and data requirements.

Ability to document business, functional, non-functional and data requirements, meeting minutes, and key decisions/actions.

5+ years' experience in identifying data anomalies.

5+ years' experience in building data sets and familiarity with PHI and PII data.

Ability to establish priorities and follow through on projects, paying close attention to detail, with minimal supervision.

Effective communication, presentation, and organizational skills.

8+ years' experience working with Visio, Excel, PowerPoint, Word, etc.

Effective team player in a fast-paced, quick-delivery environment.

Required Education: BS/BA degree or a combination of education and experience

DESIRED Skills

Demonstrate effective leadership, analytical and problem-solving skills.

Excellent written and oral communication skills with technical and business teams.

Ability to work independently as well as part of a team.

Stay abreast of current technologies in the assigned IT area.

Establish facts and draw valid conclusions.

Recognize patterns and opportunities for improvement throughout the entire organization.

Ability to discern critical from minor problems and innovate new solutions.

The Business Analyst will be responsible for Medicaid Enterprise data warehouse design, development, implementation, migration, maintenance and operational activities. The candidate will work closely with the Data Governance and Analytics team and will be one of the key business/technical resources for various Enterprise data warehouse projects, building critical data marts and ingesting data to the Big Data platform for data analytics and exchange with State and Medicaid partners. This position is a member of Medicaid ITS and works closely with the Business Intelligence / Data Analytics team.

Responsibilities

Participate in team activities, design discussions, stand-up meetings, and planning reviews with the team.

Elicit, analyze and document functional, non-functional and data requirements.

Document business requirements, meeting minutes, and key decisions/actions.

Perform business data analysis, data profiling, data cleansing and data quality analysis in various layers using database queries in both Oracle and Big Data platforms.
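As a sketch of the kind of profiling query this involves, the snippet below counts rows, nulls and distinct values per column. It assumes a hypothetical `claims` table and uses SQLite for portability; the same SQL pattern applies on Oracle or Impala.

```python
import sqlite3

# Hypothetical claims table with a few data quality problems seeded in.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE claims (claim_id INTEGER, member_id INTEGER, amount REAL)"
)
conn.executemany(
    "INSERT INTO claims VALUES (?, ?, ?)",
    [(1, 101, 250.0), (2, None, 75.5), (3, 101, None), (4, 102, 0.0)],
)

# Profile each column: total rows, null count, distinct (non-null) count.
profile = {}
for col in ("claim_id", "member_id", "amount"):
    total, nulls, distinct = conn.execute(
        f"SELECT COUNT(*), SUM({col} IS NULL), COUNT(DISTINCT {col}) "
        "FROM claims"
    ).fetchone()
    profile[col] = (total, nulls, distinct)
    print(f"{col}: rows={total} nulls={nulls} distinct={distinct}")
```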

Lead client meetings and sessions with data-driven analysis to clarify requirements and design decisions.

Perform data gap and impact analysis for new data additions and existing data changes arising from new business requirements and enhancements.
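A gap analysis of this kind often reduces to comparing key sets between a source and a target table. A minimal sketch, using hypothetical ID sets rather than any specific system:

```python
# Hypothetical key sets pulled from a source (e.g. Oracle) and a
# target (e.g. Hive) table after a migration run.
source_ids = {1001, 1002, 1003, 1004}
target_ids = {1001, 1003, 1005}

# Keys present in the source but not yet in the target (the "gap").
missing_in_target = sorted(source_ids - target_ids)
# Keys in the target with no source match (possible impact of a change).
unexpected_in_target = sorted(target_ids - source_ids)

print("missing in target:", missing_in_target)        # [1002, 1004]
print("unexpected in target:", unexpected_in_target)  # [1005]
```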

Following the organization's design standards document, create data mapping specification documents, pseudo code for the development team(s), and design documents.

Create logical and physical data models.

Review and understand existing business logic used in Oracle and Hadoop ETL platforms to verify against the business user needs.

Review PySpark programs that are used to ingest historical and incremental data.

Review Sqoop scripts that ingest historical data from the Oracle database to Hadoop IOP, along with Hive table and Impala view creation scripts for Dimension tables.

Assist the Data Analyst in creating Test Plans, designing test scenarios, writing SQL scripts (prefer Oracle and Hadoop), preparing test or mock-up data, and executing the test scripts.

Validate and record test results, and log and research defects.

Analyze production data issues, report problems, and find solutions to fix them.

Create incidents and tickets to fix production issues, and create Support Requests to deploy the development team's code to the UAT environment.

Participate in meetings to continuously upgrade functional and technical expertise.

Establish priorities and follow through on projects, paying close attention to detail, with minimal supervision.

Create and present project plan, project status and other dashboards as necessary.

Perform other duties as assigned.

Regards,
HR Recruitment Lead
EDI Matrix LLC

