Job Description
Develop and maintain scalable data pipelines and build out data services to support continuing growth in data volume and complexity.
Collaborate with the team to design database/table schemas and data schemas (dimensional data modeling or NoSQL schema experience is a plus).
Familiarity with Big Data frameworks, e.g., Hadoop, Apache Spark, etc.
Experience with Big Data engineering processes (Agile, Hadoop, HDFS, Spark, etc.); experience with AWS / Azure / GCP big data services is a plus.
Ability to communicate complex data in a simple, actionable way
Type 1: Data ETL Engineer
1) Data ETL processing
2) Create and maintain optimal data pipelines (see the illustrative sketch below).
Type 2: Data Platform Architect
1) Design and deploy data platform architectures such as a Hadoop-based framework or a column-based data warehouse.
2) Design data integrations and a data quality framework.
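For illustration only, the Type 1 responsibilities above roughly correspond to batch ETL work of the following shape. This is a minimal sketch assuming PySpark; the paths, column names (event_time, user_id), and the aggregation are hypothetical examples, not part of the actual role or any existing system.

```python
# Minimal, illustrative batch ETL sketch using PySpark.
# All paths, column names, and table layouts are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

def run_etl(input_path: str, output_path: str) -> None:
    spark = SparkSession.builder.appName("example-etl").getOrCreate()

    # Extract: read raw input (hypothetical CSV source with a header row).
    raw = spark.read.option("header", True).csv(input_path)

    # Transform: basic cleaning plus a simple daily aggregation.
    cleaned = (
        raw.dropna(subset=["event_time", "user_id"])
           .withColumn("event_date", F.to_date("event_time"))
    )
    daily_counts = cleaned.groupBy("event_date", "user_id").count()

    # Load: write a column-oriented output (Parquet), partitioned by date.
    daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(output_path)

    spark.stop()

if __name__ == "__main__":
    # Hypothetical storage locations for the raw and curated datasets.
    run_etl("s3://example-bucket/raw/events/", "s3://example-bucket/curated/daily_counts/")
```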
Requirements
Data Engineer: 2-3 years of related working experience
Data Architect: 4+ years of related working experience
Degree in Computer Science, Information Engineering, Information Management, or a related technical field
Experience with Big Data technologies: Hadoop, Spark, Kafka, etc.
Experience handling structured, semi-structured, unstructured, and streaming data is preferred.
Experience with data pipeline and workflow management tools: NiFi, Azkaban, etc.
Familiarity with the Linux OS environment.