Ascend Innovations

Overview

Ascend is a socially impactful, mission-driven technology company that provides data-driven products and consulting services to help organizations solve complex community health problems. Formed by the Greater Dayton Area Hospital Association and three major hospital networks (Premier Health, Kettering Health, and Dayton Children’s), Ascend is positioned to help communities address their most complex health problems. The data engineer role is fundamental to Ascend's data science capability: data engineers are responsible for the mechanics of data ingestion, processing, and storage.

Our Culture

We believe that life does not equal work. We believe in being direct with positive intent and leading each interaction with empathy. We are intentional about giving first. We kindly poke the bear. We laugh (A LOT). We believe that teamwork makes the dream work. And most of all, we believe that our differences make us stronger.

Job Summary

The Data Engineer is responsible for managing data access across Ascend’s projects and products throughout each product's lifecycle. Data Engineers are primarily tasked with transforming data into a format that can be easily analyzed. A data engineer designs, builds, and manages the information captured by Ascend and is entrusted with establishing and maintaining a data-handling infrastructure to analyze and process data in line with the company's goals. Data engineers work closely with data scientists and are largely in charge of architecting the solutions that enable them to do their jobs.

The Data Engineer will also work with our software developers, data analysts, and data scientists on data initiatives and ensure that an optimal data delivery architecture is consistent across ongoing projects. Additionally, they are responsible for working closely with clients and other business leaders to translate raw data into actionable insights that give Ascend a competitive edge. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, clients, and products.

Primary Responsibilities

Data Engineering

· Create and maintain optimal data pipeline architecture.

· Assemble large, complex data sets that meet functional and non-functional business requirements.

· Support data engineering efforts, including database and API design, data extraction/transformation/load, and data aggregation/integration.

· Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

· Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies (a minimal illustrative sketch follows this list).

· Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.

· Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.

· Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.

· Create data tools for analytics and data science team members that help them build and optimize our product into an innovative industry leader.

· Work with data and analytics experts to strive for greater functionality in our data systems.

· Support requirements gathering, discovery, and strategic consulting engagements.
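
To make the ETL and pipeline responsibilities above concrete, here is a minimal extract-transform-load sketch in Python. It is illustrative only: the CSV file, field names, and SQLite "warehouse" are hypothetical stand-ins, and a production pipeline here would more likely target Postgres or Redshift.

    # Minimal, illustrative ETL sketch. The file name, fields, and
    # SQLite "warehouse" are hypothetical placeholders.
    import csv
    import sqlite3

    def extract(path):
        # Extract: read raw rows from a CSV export of a source system.
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def transform(rows):
        # Transform: normalize keys and types into an analyzable shape,
        # dropping rows that are missing an identifier.
        return [
            (row["patient_id"].strip(), int(row["visit_count"]))
            for row in rows
            if row.get("patient_id")
        ]

    def load(records, db_path="warehouse.db"):
        # Load: write the cleaned records into a warehouse table.
        con = sqlite3.connect(db_path)
        con.execute(
            "CREATE TABLE IF NOT EXISTS visits "
            "(patient_id TEXT PRIMARY KEY, visit_count INTEGER)"
        )
        con.executemany("INSERT OR REPLACE INTO visits VALUES (?, ?)", records)
        con.commit()
        con.close()

    if __name__ == "__main__":
        load(transform(extract("visits_export.csv")))

A production version of this pattern adds the scheduling, retries, logging, and idempotent loads that the responsibilities above describe.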

Communication

· Provide consistent project updates to stakeholders

· Report and escalate to management as needed

· Work with stakeholders to help define project scope and clarify project requirements

Qualifications

· 5+ years of experience in a Data Engineer or Developer role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. Candidates should also have experience with the following software/tools:

o Linux/Unix-based operating systems and shell scripting (required)

o Big data tools: Hadoop, Spark, Kafka, etc.

o Relational (SQL) and NoSQL databases, including Postgres and MongoDB.

o Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. (a brief Airflow sketch follows the qualifications list below)

o AWS cloud services: EC2, EMR, RDS, Redshift

o Stream-processing systems: Storm, Spark-Streaming, etc.

o Programming and scripting languages: Python, Java, C++, Scala, etc.

· Advanced working SQL knowledge, experience with relational databases and query authoring, and working familiarity with a variety of databases.

· Experience building and optimizing "big data" data pipelines, architectures, and data sets.

· Experience designing RESTful APIs, architecting robust and scalable systems, and deploying and maintaining web services, including web server configuration (e.g., Apache, NGINX), message queues (e.g., RabbitMQ, Apache Kafka), microservice architectures, proxy servers, and sidecar patterns.

· Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.

· Strong analytic skills related to working with unstructured datasets.

· Experience building processes supporting data transformation, data structures, metadata, dependency management, and workload management.

· A successful history of manipulating, processing, and extracting value from large, disconnected datasets.

· Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.

· Strong project management and organizational skills.

· Experience supporting and working with cross-functional teams in a dynamic environment.

· Willingness to learn or adopt new tools and processes as the team grows and evolves.

· Extremely attentive to detail

· Excellent organizational skills

· Team/collaboration oriented

· Demonstrated leadership and self-direction
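
As a brief illustration of the workflow management tools named in the qualifications above (Airflow in particular), here is a minimal DAG sketch. The dag_id, schedule, and task bodies are hypothetical placeholders, and the sketch assumes Airflow 2.4 or later.

    # Minimal, illustrative Airflow DAG. The dag_id, schedule, and
    # task bodies are hypothetical placeholders (assumes Airflow 2.4+).
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("extract raw data from source systems")

    def transform():
        print("reshape raw data into an analyzable format")

    def load():
        print("load transformed data into the warehouse")

    with DAG(
        dag_id="example_etl",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_load = PythonOperator(task_id="load", python_callable=load)

        # Run extract, then transform, then load, once per day.
        t_extract >> t_transform >> t_load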

What We Can Offer

· Flexible time off

· The ability to work remotely

· A competitive salary

· 401(k) with automatic company contribution

· A choice of flexible career pathways

· Flexible technology stacks and equipment

· Professional development opportunities