Do you believe that data provides game-changing insight? Do you see data as an asset that creates a competitive advantage? Great, so do we.
Micron Technology operates in a highly competitive industry where innovation depends on talented minds extracting fresh insights from an ever-expanding data universe. As you already know, this can only happen when quality data is available at the right time and in the right format.
Our expert team of big data software engineers is dedicated to making this happen. We operate in a diverse, collaborative environment where problem solving is a team sport and creative solutions are recognized and rewarded.
Does this sound like the right team for you? Good news. We’re hiring!
As a Big Data Software Engineer at Micron Technology Inc., you are a key member of a cross-functional team responsible for developing and growing Micron’s methods and systems for extracting new insights from our expanding data streams.
You will collaborate with various roles and teams across the enterprise to design and implement systems that extract data and insights from Micron’s manufacturing, R&D, and engineering systems, transforming them into an actionable format.
Responsibilities and Tasks
Understand the Business Problem and the Data that is Relevant to the Problem
Develop an intimate understanding of company and department strategy
Translate data analysis needs into solution requirements
Identify and understand the data sources that are relevant for analysis
Become a subject matter expert in company data analysis platforms
Architect Data Analysis Systems
Blueprint and build flexible data pipelines, software, and other tools to support massive analytic loads on MPP systems (Hadoop, Spark, HBase, Teradata)
Design big data solutions for resiliency and integration with transactional systems
Develop solutions that are Fast, Flexible and Friendly for a highly technical user base
Identify and select the optimal tools and methods to build data solutions for continuous deployment in an agile environment
Partner closely with engineers and data scientists to understand solution workflows and interaction expectations
Design for Operations
Instrument solutions for continuous monitoring from system and user view points
Utilize design patterns like store-and-forward to ensure robust uptime and recovery
Eliminate single points of failure and plan for linear scalability
Utilize open RESTful interfaces for remote operations integration
Based in Taichung
B.S. degree or above in Computer Science or a related field (Information Engineering, Information Management, Software Engineering)
Proficiency with development tools (at least one of: Git/SVN, Artifactory, Maven, Jenkins)
2+ years of development experience with Java, Python, or Scala
2+ years of experience with the Hadoop ecosystem (e.g., Spark, Hive, HBase, Pig)
Experience with data streaming stacks (e.g., NiFi, Kafka, Spark Streaming, Storm)
Experience with relational databases (e.g., Teradata, MS SQL Server, Oracle, MySQL)
Experience with R/Shiny, MLlib, SciPy, NumPy, etc.
Excellent communication and problem-solving skills
Good communication skills in English, or a TOEIC score of 650 or above
Business travel is expected, and cooperation with partners and customers in different time zones (USA, India, etc.) is sometimes needed.
Relocation Level: No