Taipei — Job Description / Key Responsibilities

We are looking for a savvy Data Engineer to join our growing team of analytics experts. You will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up.
The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure that optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing or even redesigning our company's data architecture to support our next generation of products and data initiatives.

- Create and maintain optimal data pipeline architecture.
- Assemble complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, redesigning infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Support analytics experts in using the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Create tools that help analytics and data science team members build and optimize our data products, such as AI and machine learning solutions.
- Work with stakeholders, including the Executive, BD, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
Key Requirements

Experience
- Experience with relational SQL and NoSQL databases, such as Postgres.
- Experience building and optimizing data pipelines, architectures, and data sets.
- Experience performing root-cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Experience building processes that support data transformation, data structures, metadata, dependency, and workload management.
- Experience supporting and working with cross-functional teams.

Skills
- Experience with object-oriented or functional scripting languages, such as Python.
- Experience with AWS cloud services (EC2, EMR, RDS, Redshift) is a plus.
- Experience with data pipeline and workflow management tools (Azkaban, Luigi, Airflow) is a plus.
- Experience with big data tools (Hadoop, Spark, Kafka) is a plus.
- Experience with stream-processing systems (Storm, Spark Streaming) is a plus.

Benefits

What would you like to have for benefits?
Join us! We can work it out together, and we will level up for sure! Currently, we offer:

- Plenty of wine, beer, and snacks, so don't worry about getting hungry.
- Flexible work hours. You don't have to be trapped in traffic jams.
- A free and fun work environment. Let's have fun at work!
- Unlimited floating holidays. We work hard and play hard!