Nordstrom – Senior Data Engineer (Job ID-328166)

NOTE: Please go through the job description, and if you feel you are a suitable candidate, send your resume directly to brother OWAIS MALIK ([email protected]); he will forward it to the hiring manager if it is a match.

Location: Seattle, Washington, USA.

The Senior Data Engineer will be part of a key team of Nordstrom Technology professionals that applies scientific, mathematical and social principles to design, build, and maintain technology products, devices, systems, and solutions. These technology products and solutions provide amazing customer experiences while meeting the needs of the business.
More specifically, you will be part of the team building real-time enterprise data streams that ingest data from multiple heterogeneous data sources and provision it as both raw data streams and canonical business events, so that data is provisioned once at the enterprise level for any enterprise data use case. You will deploy and maintain stream processing technologies that feed real-time operational data stores and raw data stores, provide frameworks for transforming raw data using big data technologies, and help data scientists with machine learning deliverables.

Job Description:
-Responsible for building real-time streaming applications that enable streaming analytics, business intelligence, and analytical and operational reporting using modern open-source big data technologies.
-Ingest data from heterogeneous data sources and publish it as enterprise business events on a Kafka stream.
-Building raw data stores and data processing capabilities using big data technologies.
-Responsible for building and consuming APIs.
-Building Scalable Data Pipelines for generating training datasets for machine learning deliverables.
-Mentor junior engineers and drive end-to-end design, implementation, and delivery of engineering components.
-Experience deploying, maintaining, and building applications on distributed stream processing engines such as Storm, Flink, NiFi, and Spark.
-Experience building data transformation layers and ETL frameworks using big data technologies such as Hive, Spark, and Presto.
-Experience working with container management technologies such as Docker, Kubernetes.
-Strive for continuous improvement of code quality and development practices.
-Willingness to adapt to and self-learn new technologies and deliver on them.
-Building and maintaining solutions on highly available environments.
-Working knowledge of CI/CD.
-Working knowledge of building telemetry and data integrity checks as part of the delivery of applications.
-Technical expertise to build code that is performant as well as secure.
-Technical depth and vision to perform POC’s and evaluate different technologies.
-Translate business issues to technical terms.
-Understand, leverage, and apply best practices effectively; lead by example and establish coding standards and best practices for the technology.
-Collaborate with cross-functional teams – business stakeholders, engineers, program management, project management, etc. – to produce the best solutions possible.
-Anticipate system/application challenges and propose solutions.
-Contribute to story sizing and work estimates for implementation, validation, delivery, and documentation.
-Review user stories to ensure a quality user experience, well-defined acceptance criteria, and thorough test coverage.
-Participate in design and code review to ensure the quality and testability of feature code.
-Implement test automation to validate the new and existing code.
-Adjust positively to quickly-changing priorities and shifting goals.

You own this if you…
-5-7 years of professional experience in the practice area.
-Proven high level of expertise in Java and related technology stacks.
-Cloud computing experience (e.g., AWS, Azure) is a plus.
-Good Working knowledge of Apache Kafka.
-Good working knowledge of containerized technologies such as Docker, Kubernetes.
-Working knowledge of big data technologies such as Apache Storm, Flink, NiFi, Spark, Presto, Elasticsearch, DynamoDB, and relational data stores.
-Experience building data warehouses/data marts is a plus.
-Prior experience building a data lake on Hadoop/AWS is a plus.
-Proven proficiency in API development (REST).
-BS or MS in Computer Science or equivalent.
-Agile software development experience.
-Experience working with Elasticsearch, NoSQL data stores is a plus.
-Experience working in a metrics driven environment is a plus.
-Prior experience with machine learning algorithm implementation is a plus.

We offer a comprehensive benefits package that includes medical, vision and dental coverage, a fabulous merchandise discount, an employer-matched 401(k) plan, employee stock purchase plan and much more depending on your role.

We are an equal opportunity employer committed to providing a diverse environment.

This job description is intended to describe the general nature of the work employees can expect within this particular job classification. It is not a comprehensive inventory of all duties, responsibilities, and qualifications required for this job.

Job Technology
Date Posted 01/14/2018, 11:47:12 PM

