Senior Lead Data Engineer, GQRC Analytics (HYBRID)

1901 Romence Rd Pkwy, Portage, Michigan

Work Flexibility: Hybrid

Position Summary:

As the Senior Lead Data Engineer within the Global Quality, Regulatory & Clinical (GQRC) Analytics team, you will lead the design and maintenance of the data infrastructure that the analytics arm of the team uses to create solutions that empower our leaders to make data-driven decisions. Key data sources are eSystems used within the Quality Management System, such as Trackwise, onePLM, SLMS, Quickbase, and QARAD. Primary tools used by the team include Azure Data Factory, Databricks, Azure DevOps, and Power Platform.

This role requires deep knowledge of data engineering and adjacent cloud technologies, applied to solving business problems. You will be responsible for understanding business needs, translating them into data engineering solutions, and taking a leadership role in the execution of various data engineering projects. You will self-manage end-to-end data engineering projects, from requirements gathering and documentation through ETL/ELT build and deployment to pipeline optimization, and will contribute architecture and cloud engineering expertise in collaboration with others on the team.

Who We Want

  • Analytical problem solvers. People who go beyond just fixing to identify root causes, evaluate optimal solutions, and recommend comprehensive upgrades to prevent future issues.

  • Dedicated achievers. Relentless about quality, people who thrive in a fast-paced environment and will stop at nothing to ensure a project is complete and meets regulations and expectations.

  • Goal-oriented developers. Keeping the customer and system requirements squarely in focus, people who deliver safe and robust solutions.

What You Will Do

  • Provide technical and project leadership in the team's day-to-day project portfolio.

  • Work within Scrum and Agile frameworks; experience with both is expected.

  • Demonstrate financial acumen by developing financial impact assessments for existing projects and evaluating new opportunities.

  • Orchestrate collaboration across functions within the Stryker enterprise to leverage domain expertise and capabilities.

  • Independently produce requirements-gathering documentation and needs assessments, and develop and maintain technical documentation for key systems and data assets.

  • Partner with enterprise-wide business and GQRC senior management to understand and prioritize data and information requirements and translate customer/business requirements into technical requirements.

  • Manage expectations on the timing and scope of project work.

  • Creatively identify opportunities and lead discussions with key stakeholders on data architecture and data movement to enable business opportunities.

  • Mentor and coach data engineering colleagues in developing their skills in these areas.

  • Identify the most appropriate data engineering methods and tools for a particular use case or project, drawing on advanced experience.

  • Orchestrate presentations and communications, effectively conveying complex topics up to the leadership level.

  • Lead and mentor others in root cause/problem solving efforts.

  • Collaborate with data owners to improve data quality and streamline data preparation.

  • Lead the development of the data engineering architecture for GQRC (e.g., databases, servers, data models, and pipelines), investigating and implementing new methods to optimize it.

  • Manage the implementation, organization, and administration of the various data platforms (e.g., Azure, Microsoft Fabric, Databricks).

  • Manage the data model refresh schedules for the various data systems to balance the needs of the business while maintaining optimal system performance.

  • Support the development of the strategic vision for data analytics, with focus on data engineering.

  • Maintain awareness of industry best practices and help implement the desired changes and improvements where appropriate.

  • Develop relationships with relevant members of the Information Technology (IT) department to extract, transform, and load data from multiple disparate data sources, leveraging their expertise and fostering collaboration.

  • Collaborate and participate in events and meetings, representing the GQRC Analytics team.

What You Need (Requirements)

  • Bachelor’s degree in computer science, data analytics, mathematics, statistics, data science, or a related field, with applicable data engineering & architecture work experience.

  • 6+ years of work experience required.

  • Master's degree or PhD in Computer Science or a quantitative discipline preferred.

What We Would Like (Preferred)

  • Proven record of managing end-to-end data engineering projects: from problem and requirements definition to architecture validation and deployment.

  • Proficient in object-oriented programming and data structures.

  • Experienced/advanced in at least one programming language central to Data Engineering (e.g. SQL/Python/R/Spark) or skilled in multiple languages.

  • Experienced/advanced in optimizing workflows, pipelines, and algorithms.

  • Experienced/advanced in ETL/ELT, pipeline creation, orchestration, & data store/warehouse/systems architecture.

  • Experienced/advanced working knowledge of cloud-native tools for data storage, distributed computing, business intelligence, and infrastructure as code (Power BI, Apache Spark, Azure, etc.).

  • Experience with DataOps, DevOps, and/or SecOps desired.

  • Experience implementing/managing Agile/DevOps practices using GitHub/GitLab for version control and infrastructure as code desired.

  • Experience with verification/validation of data-enabled tools within a regulatory environment desired.

Travel Percentage: 10%

Stryker Corporation is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, ethnicity, color, religion, sex, gender identity, sexual orientation, national origin, disability, or protected veteran status. Stryker is an EO employer – M/F/Veteran/Disability.

Stryker Corporation will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay or the pay of another employee or applicant. However, employees who have access to the compensation information of other employees or applicants as a part of their essential job functions cannot disclose the pay of other employees or applicants to individuals who do not otherwise have access to compensation information, unless the disclosure is (a) in response to a formal complaint or charge, (b) in furtherance of an investigation, proceeding, hearing, or action, including an investigation conducted by the employer, or (c) consistent with the contractor’s legal duty to furnish information.
